[
{
"msg_contents": "Hello,\n\nWe have a query that is run almost each second and it's very important to\nsqueeze every other ms out of it. The query is:\n\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\nWHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\nOR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\nOR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\nORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\nLIMIT $7\nFOR UPDATE SKIP LOCKED\n\nI added following index:\n\nCREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority\nDESC, times_failed);\n\nAnd it didn't help at all, even opposite - the planning phase time grew up\nfrom ~2ms up to ~40 ms leaving execution time intact:\n\n Limit (cost=29780.02..29781.27 rows=100 width=18) (actual\ntime=827.753..828.113 rows=100 loops=1)\n -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual\ntime=827.752..828.096 rows=100 loops=1)\n -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual\ntime=827.623..827.653 rows=100 loops=1)\n Sort Key: priority DESC, times_failed\n Sort Method: external sort Disk: 5472kB\n -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00\nrows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1)\n Filter: (((status = 0) AND (id <> ALL\n('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,\n23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,\n43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at >\n'2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND\n(started_at < '2017-06-23 03:11:09'::timestamp without time zone)))\n Planning time: 40.734 ms\n Execution time: 913.638 ms\n(9 rows)\n\n\nI see that query still went through the Seq Scan instead of Index Scan. Is\nit due to poorly crafted index or because of query structure? Is it\npossible to make this query faster?\n\n\nThanks\n\nHello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. 
The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKEDI added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);And it didn't help at all, even opposite - the planning phase time grew up from ~2ms up to ~40 ms leaving execution time intact: Limit (cost=29780.02..29781.27 rows=100 width=18) (actual time=827.753..828.113 rows=100 loops=1) -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual time=827.752..828.096 rows=100 loops=1) -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual time=827.623..827.653 rows=100 loops=1) Sort Key: priority DESC, times_failed Sort Method: external sort Disk: 5472kB -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00 rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1) Filter: (((status = 0) AND (id <> ALL ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at > '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND (started_at < '2017-06-23 03:11:09'::timestamp without time zone))) Planning time: 40.734 ms Execution time: 913.638 ms(9 rows)I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?Thanks",
"msg_date": "Wed, 28 Jun 2017 13:47:44 +0700",
"msg_from": "Yevhenii Kurtov <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
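A minimal diagnostic sketch, not from the thread: running the statement with literal values (the placeholders below stand in for $1..$6) under EXPLAIN (ANALYZE, BUFFERS) shows whether the Sort node still spills to disk, independent of any prepared-statement effects.

-- Sketch only: the literals are placeholders for the query parameters, and
-- FOR UPDATE SKIP LOCKED is left off so the check takes no row locks.
-- "Sort Method: external sort  Disk: 5472kB" in the plan above means the
-- sort did not fit in work_mem.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c0.id
FROM campaign_jobs AS c0
WHERE (c0.status = 0 AND NOT (c0.id = ANY ('{1,2,3}'::int[])))
   OR (c0.status = 2 AND c0.failed_at  > '2017-06-22 03:18:09')
   OR (c0.status = 1 AND c0.started_at < '2017-06-23 03:11:09')
ORDER BY c0.priority DESC, c0.times_failed
LIMIT 100;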
{
"msg_contents": "2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n\n> Hello,\n>\n> We have a query that is run almost each second and it's very important to\n> squeeze every other ms out of it. The query is:\n>\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT $7\n> FOR UPDATE SKIP LOCKED\n>\n> I added following index:\n>\n> CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority\n> DESC, times_failed);\n>\n> And it didn't help at all, even opposite - the planning phase time grew up\n> from ~2ms up to ~40 ms leaving execution time intact:\n>\n> Limit (cost=29780.02..29781.27 rows=100 width=18) (actual\n> time=827.753..828.113 rows=100 loops=1)\n> -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual\n> time=827.752..828.096 rows=100 loops=1)\n> -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual\n> time=827.623..827.653 rows=100 loops=1)\n> Sort Key: priority DESC, times_failed\n> Sort Method: external sort Disk: 5472kB\n> -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00\n> rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1)\n> Filter: (((status = 0) AND (id <> ALL\n> ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,\n> 23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,\n> 43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at >\n> '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND\n> (started_at < '2017-06-23 03:11:09'::timestamp without time zone)))\n> Planning time: 40.734 ms\n> Execution time: 913.638 ms\n> (9 rows)\n>\n>\n> I see that query still went through the Seq Scan instead of Index Scan. Is\n> it due to poorly crafted index or because of query structure? Is it\n> possible to make this query faster?\n>\n\nThere are few issues\n\na) parametrized LIMIT\nb) complex predicate with lot of OR\nc) slow external sort\n\nb) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\"\nquery\nc) if you can and you have good enough memory .. try to increase work_mem\n.. maybe 20MB\n\nif you change query to union queries, then you can use conditional indexes\n\ncreate index(id) where status = 0;\ncreate index(failed_at) where status = 2;\ncreate index(started_at) where status = 1;\n\nRegards\n\nPavel\n\n\n>\n> Thanks\n>\n\n2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:Hello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. 
The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKEDI added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);And it didn't help at all, even opposite - the planning phase time grew up from ~2ms up to ~40 ms leaving execution time intact: Limit (cost=29780.02..29781.27 rows=100 width=18) (actual time=827.753..828.113 rows=100 loops=1) -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual time=827.752..828.096 rows=100 loops=1) -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual time=827.623..827.653 rows=100 loops=1) Sort Key: priority DESC, times_failed Sort Method: external sort Disk: 5472kB -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00 rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1) Filter: (((status = 0) AND (id <> ALL ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at > '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND (started_at < '2017-06-23 03:11:09'::timestamp without time zone))) Planning time: 40.734 ms Execution time: 913.638 ms(9 rows)I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?There are few issues a) parametrized LIMITb) complex predicate with lot of OR c) slow external sortb) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" queryc) if you can and you have good enough memory .. try to increase work_mem .. maybe 20MBif you change query to union queries, then you can use conditional indexescreate index(id) where status = 0;create index(failed_at) where status = 2;create index(started_at) where status = 1;RegardsPavelThanks",
"msg_date": "Wed, 28 Jun 2017 09:12:35 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
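Pavel's index sketches omit the table name; written out as full DDL (the index names below are made up for illustration), the conditional indexes would be roughly:

-- Partial ("conditional") indexes matching Pavel's sketch; the names are
-- placeholders, the predicates come straight from his suggestion.
CREATE INDEX campaign_jobs_status0_id_idx         ON campaign_jobs (id)         WHERE status = 0;
CREATE INDEX campaign_jobs_status2_failed_at_idx  ON campaign_jobs (failed_at)  WHERE status = 2;
CREATE INDEX campaign_jobs_status1_started_at_idx ON campaign_jobs (started_at) WHERE status = 1;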
{
"msg_contents": "Hello Pavel,\n\nCan you please give a tip how to rewrite the query with UNION clause? I\ndidn't use it at all before actually and afraid that will not get it\nproperly from the first time :)\n\nOn Wed, Jun 28, 2017 at 2:12 PM, Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> 2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n>\n>> Hello,\n>>\n>> We have a query that is run almost each second and it's very important to\n>> squeeze every other ms out of it. The query is:\n>>\n>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n>> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n>> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n>> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> LIMIT $7\n>> FOR UPDATE SKIP LOCKED\n>>\n>> I added following index:\n>>\n>> CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority\n>> DESC, times_failed);\n>>\n>> And it didn't help at all, even opposite - the planning phase time grew\n>> up from ~2ms up to ~40 ms leaving execution time intact:\n>>\n>> Limit (cost=29780.02..29781.27 rows=100 width=18) (actual\n>> time=827.753..828.113 rows=100 loops=1)\n>> -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual\n>> time=827.752..828.096 rows=100 loops=1)\n>> -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual\n>> time=827.623..827.653 rows=100 loops=1)\n>> Sort Key: priority DESC, times_failed\n>> Sort Method: external sort Disk: 5472kB\n>> -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00\n>> rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1)\n>> Filter: (((status = 0) AND (id <> ALL\n>> ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,\n>> 23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,\n>> 43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at >\n>> '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND\n>> (started_at < '2017-06-23 03:11:09'::timestamp without time zone)))\n>> Planning time: 40.734 ms\n>> Execution time: 913.638 ms\n>> (9 rows)\n>>\n>>\n>> I see that query still went through the Seq Scan instead of Index Scan.\n>> Is it due to poorly crafted index or because of query structure? Is it\n>> possible to make this query faster?\n>>\n>\n> There are few issues\n>\n> a) parametrized LIMIT\n> b) complex predicate with lot of OR\n> c) slow external sort\n>\n> b) signalize maybe some strange in design .. try to replace \"OR\" by\n> \"UNION\" query\n> c) if you can and you have good enough memory .. try to increase work_mem\n> .. maybe 20MB\n>\n> if you change query to union queries, then you can use conditional indexes\n>\n> create index(id) where status = 0;\n> create index(failed_at) where status = 2;\n> create index(started_at) where status = 1;\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Thanks\n>>\n>\n>\n\nHello Pavel,Can you please give a tip how to rewrite the query with UNION clause? I didn't use it at all before actually and afraid that will not get it properly from the first time :)On Wed, Jun 28, 2017 at 2:12 PM, Pavel Stehule <[email protected]> wrote:2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:Hello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. 
The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKEDI added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);And it didn't help at all, even opposite - the planning phase time grew up from ~2ms up to ~40 ms leaving execution time intact: Limit (cost=29780.02..29781.27 rows=100 width=18) (actual time=827.753..828.113 rows=100 loops=1) -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual time=827.752..828.096 rows=100 loops=1) -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual time=827.623..827.653 rows=100 loops=1) Sort Key: priority DESC, times_failed Sort Method: external sort Disk: 5472kB -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00 rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1) Filter: (((status = 0) AND (id <> ALL ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at > '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND (started_at < '2017-06-23 03:11:09'::timestamp without time zone))) Planning time: 40.734 ms Execution time: 913.638 ms(9 rows)I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?There are few issues a) parametrized LIMITb) complex predicate with lot of OR c) slow external sortb) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" queryc) if you can and you have good enough memory .. try to increase work_mem .. maybe 20MBif you change query to union queries, then you can use conditional indexescreate index(id) where status = 0;create index(failed_at) where status = 2;create index(started_at) where status = 1;RegardsPavelThanks",
"msg_date": "Wed, 28 Jun 2017 14:28:00 +0700",
"msg_from": "Yevhenii Kurtov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "2017-06-28 9:28 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n\n> Hello Pavel,\n>\n> Can you please give a tip how to rewrite the query with UNION clause? I\n> didn't use it at all before actually and afraid that will not get it\n> properly from the first time :)\n>\n\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\nWHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\nUNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\nWHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\nUNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\nWHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\nORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\nLIMIT $7\nFOR UPDATE SKIP LOCKED\n\nSomething like this\n\nPavel\n\n\n>\n> On Wed, Jun 28, 2017 at 2:12 PM, Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> 2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n>>\n>>> Hello,\n>>>\n>>> We have a query that is run almost each second and it's very important\n>>> to squeeze every other ms out of it. The query is:\n>>>\n>>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>>> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n>>> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n>>> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n>>> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>>> LIMIT $7\n>>> FOR UPDATE SKIP LOCKED\n>>>\n>>> I added following index:\n>>>\n>>> CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at,\n>>> priority DESC, times_failed);\n>>>\n>>> And it didn't help at all, even opposite - the planning phase time grew\n>>> up from ~2ms up to ~40 ms leaving execution time intact:\n>>>\n>>> Limit (cost=29780.02..29781.27 rows=100 width=18) (actual\n>>> time=827.753..828.113 rows=100 loops=1)\n>>> -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual\n>>> time=827.752..828.096 rows=100 loops=1)\n>>> -> Sort (cost=29780.02..30279.90 rows=199952 width=18)\n>>> (actual time=827.623..827.653 rows=100 loops=1)\n>>> Sort Key: priority DESC, times_failed\n>>> Sort Method: external sort Disk: 5472kB\n>>> -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00\n>>> rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1)\n>>> Filter: (((status = 0) AND (id <> ALL\n>>> ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,\n>>> 23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,\n>>> 43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at >\n>>> '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND\n>>> (started_at < '2017-06-23 03:11:09'::timestamp without time zone)))\n>>> Planning time: 40.734 ms\n>>> Execution time: 913.638 ms\n>>> (9 rows)\n>>>\n>>>\n>>> I see that query still went through the Seq Scan instead of Index Scan.\n>>> Is it due to poorly crafted index or because of query structure? Is it\n>>> possible to make this query faster?\n>>>\n>>\n>> There are few issues\n>>\n>> a) parametrized LIMIT\n>> b) complex predicate with lot of OR\n>> c) slow external sort\n>>\n>> b) signalize maybe some strange in design .. try to replace \"OR\" by\n>> \"UNION\" query\n>> c) if you can and you have good enough memory .. try to increase work_mem\n>> .. 
maybe 20MB\n>>\n>> if you change query to union queries, then you can use conditional indexes\n>>\n>> create index(id) where status = 0;\n>> create index(failed_at) where status = 2;\n>> create index(started_at) where status = 1;\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>>\n>>> Thanks\n>>>\n>>\n>>\n>\n\n2017-06-28 9:28 GMT+02:00 Yevhenii Kurtov <[email protected]>:Hello Pavel,Can you please give a tip how to rewrite the query with UNION clause? I didn't use it at all before actually and afraid that will not get it properly from the first time :)SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKEDSomething like thisPavel On Wed, Jun 28, 2017 at 2:12 PM, Pavel Stehule <[email protected]> wrote:2017-06-28 8:47 GMT+02:00 Yevhenii Kurtov <[email protected]>:Hello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKEDI added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);And it didn't help at all, even opposite - the planning phase time grew up from ~2ms up to ~40 ms leaving execution time intact: Limit (cost=29780.02..29781.27 rows=100 width=18) (actual time=827.753..828.113 rows=100 loops=1) -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual time=827.752..828.096 rows=100 loops=1) -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual time=827.623..827.653 rows=100 loops=1) Sort Key: priority DESC, times_failed Sort Method: external sort Disk: 5472kB -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00 rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1) Filter: (((status = 0) AND (id <> ALL ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at > '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND (started_at < '2017-06-23 03:11:09'::timestamp without time zone))) Planning time: 40.734 ms Execution time: 913.638 ms(9 rows)I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?There are few issues a) parametrized LIMITb) complex predicate with lot of OR c) slow external sortb) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" queryc) if you can and you have good enough memory .. try to increase work_mem .. maybe 20MBif you change query to union queries, then you can use conditional indexescreate index(id) where status = 0;create index(failed_at) where status = 2;create index(started_at) where status = 1;RegardsPavelThanks",
"msg_date": "Wed, 28 Jun 2017 09:43:25 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Yevhenii Kurtov\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 28 de Junio 2017 3:47:44\n> Asunto: [PERFORM]\n> \n> Hello,\n> \n> We have a query that is run almost each second and it's very important to\n> squeeze every other ms out of it. The query is:\n> \n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT $7\n> FOR UPDATE SKIP LOCKED\n> \n> I added following index:\n> \n> CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority\n> DESC, times_failed);\n> \n> And it didn't help at all, even opposite - the planning phase time grew up\n> from ~2ms up to ~40 ms leaving execution time intact:\n> \n> Limit (cost=29780.02..29781.27 rows=100 width=18) (actual\n> time=827.753..828.113 rows=100 loops=1)\n> -> LockRows (cost=29780.02..32279.42 rows=199952 width=18) (actual\n> time=827.752..828.096 rows=100 loops=1)\n> -> Sort (cost=29780.02..30279.90 rows=199952 width=18) (actual\n> time=827.623..827.653 rows=100 loops=1)\n> Sort Key: priority DESC, times_failed\n> Sort Method: external sort Disk: 5472kB\n> -> Seq Scan on campaign_jobs c0 (cost=0.00..22138.00\n> rows=199952 width=18) (actual time=1.072..321.410 rows=200000 loops=1)\n> Filter: (((status = 0) AND (id <> ALL\n> ('{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,\n> 23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,\n> 43,44,45,46,47,48}'::integer[]))) OR ((status = 2) AND (failed_at >\n> '2017-06-22 03:18:09'::timestamp without time zone)) OR ((status = 1) AND\n> (started_at < '2017-06-23 03:11:09'::timestamp without time zone)))\n> Planning time: 40.734 ms\n> Execution time: 913.638 ms\n> (9 rows)\n> \n> \n> I see that query still went through the Seq Scan instead of Index Scan. Is\n> it due to poorly crafted index or because of query structure? Is it\n> possible to make this query faster?\n> \n> \n> Thanks\n> \nWell, most of the time is spent ordering, and it is doing a (slow) disk sort. Try increasing work_mem for a in-memory sort.\n\nHow many rows in campaign_jobs? If the query is returning most of the rows in the table, it will not going to use any index anyway.\n\nHTH\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jun 2017 09:08:54 -0400 (EDT)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
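A sketch of the work_mem advice from both replies: the setting can be raised only for the transaction that runs the hot query instead of server-wide; 20MB follows Pavel's hint and comfortably covers the roughly 5.5MB the plan showed spilling to disk.

BEGIN;
-- Scope the larger work_mem to this transaction only; the exact value is a
-- guess that just needs to exceed the observed on-disk sort size.
SET LOCAL work_mem = '20MB';
-- ... run the SELECT ... FOR UPDATE SKIP LOCKED query here ...
COMMIT;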
{
"msg_contents": "\r\n\r\nOn 2017-06-28, Pavel Stehule wrote ...\r\n> On 2017-06-28, Yevhenii Kurtov wrote ...\r\n>> On 2017-06-28, Pavel Stehule wrote ...\r\n>>> On 2017-06-28, Yevhenii Kurtov wrote ...\r\n>>>> We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is:\r\n>>>> ...\r\n>>>> I added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);\r\n>>>> ...\r\n>>> There are few issues \r\n>>> a) parametrized LIMIT\r\n>>> b) complex predicate with lot of OR \r\n>>> c) slow external sort\r\n>>>\r\n>>> b) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" query\r\n>>> c) if you can and you have good enough memory .. try to increase work_mem .. maybe 20MB\r\n>>>\r\n>>> if you change query to union queries, then you can use conditional indexes\r\n>>>\r\n>>> create index(id) where status = 0;\r\n>>> create index(failed_at) where status = 2;\r\n>>> create index(started_at) where status = 1;\r\n>>\r\n>> Can you please give a tip how to rewrite the query with UNION clause?\r\n>\r\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\r\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\r\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\r\n> WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\r\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\r\n> WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\r\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\r\n> LIMIT $7\r\n> FOR UPDATE SKIP LOCKED\r\n\r\n\r\nNormally (at least for developers I've worked with), that kind of query structure is used when the \"status\" values don't overlap and don't change from query to query. Judging from Pavel's suggested conditional indexes (i.e. \"where status = <constant>\"), he also thinks that is likely.\r\n\r\nGive the optimizer that information so that it can use it. Assuming $1 = 0 and $3 = 2 and $5 = 1, substitute literals. Substitute literal for $7 in limit. Push order by and limit to each branch of the union all (or does Postgres figure that out automatically?) Replace union with union all (not sure about Postgres, but allows other dbms to avoid sorting and merging result sets to eliminate duplicates). (Use of UNION ALL assumes that \"id\" is unique across rows as implied by only \"id\" being selected with FOR UPDATE. If multiple rows can have the same \"id\", then use UNION to eliminate the duplicates.)\r\n\r\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 0 AND NOT \"id\" = ANY($1)\r\n UNION ALL\r\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 2 AND \"failed_at\" > $2\r\n UNION ALL\r\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 1 AND \"started_at\" < $3\r\nORDER BY \"priority\" DESC, \"times_failed\"\r\nLIMIT 100\r\nFOR UPDATE SKIP LOCKED\r\n\r\n\r\nAnother thing that you could try is to push the ORDER BY and LIMIT to the branches of the UNION (or does Postgres figure that out automatically?) and use slightly different indexes. This may not make sense for all the branches but one nice thing about UNION is that each branch can be tweaked independently. Also, there are probably unmentioned functional dependencies that you can use to reduce the index size and/or improve your match rate. Example - if status = 1 means that the campaign_job has started but not failed or completed, then you may know that started_at is set, but failed_at and ended_at are null. 
The < comparison in and of itself implies that only rows where \"started_at\" is not null will match the condition.\r\n\r\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE (((c0.\"status\" = 0) AND NOT (c0.\"id\" = ANY($1)))) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\r\nUNION ALL\r\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 2) AND (c0.\"failed_at\" > $2)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\r\nUNION ALL\r\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 1) AND (c0.\"started_at\" < $3)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\r\nORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\r\nLIMIT 100\r\nFOR UPDATE SKIP LOCKED\r\n\r\nIncluding the \"priority\", \"times_failed\" and \"id\" columns in the indexes along with \"failed_at\"/\"started_at\" allows the optimizer to do index only scans. (May still have to do random I/O to the data page to determine tuple version visibility but I don't think that can be eliminated.)\r\n\r\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\") where \"status\" = 0;\r\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"failed_at\") where \"status\" = 2 and \"failed_at\" is not null;\r\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"started_at\") where \"status\" = 1 and \"started_at\" is not null; -- and ended_at is null and ...\r\n\r\n\r\nI'm assuming that the optimizer knows that \"where status = 1 and started_at < $3\" implies \"and started_at is not null\" and will consider the conditional index. If not, then the \"and started_at is not null\" needs to be explicit.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jun 2017 13:10:13 +0000",
"msg_from": "Brad DeJong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
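One syntactic detail the per-branch variant above glosses over: PostgreSQL only accepts ORDER BY/LIMIT inside a UNION branch when the branch is parenthesized, and the outer ORDER BY can only reference columns that appear in the select list. A sketch in accepted syntax (FOR UPDATE is still left off; its restriction with UNION comes up later in the thread):

-- Each branch is parenthesized so its own ORDER BY/LIMIT is legal, and
-- priority/times_failed are added to the select list so the outer ORDER BY
-- can reference them.
(SELECT id, priority, times_failed FROM campaign_jobs
  WHERE status = 0 AND NOT (id = ANY ($1))
  ORDER BY priority DESC, times_failed LIMIT 100)
UNION ALL
(SELECT id, priority, times_failed FROM campaign_jobs
  WHERE status = 2 AND failed_at > $2
  ORDER BY priority DESC, times_failed LIMIT 100)
UNION ALL
(SELECT id, priority, times_failed FROM campaign_jobs
  WHERE status = 1 AND started_at < $3
  ORDER BY priority DESC, times_failed LIMIT 100)
ORDER BY priority DESC, times_failed
LIMIT 100;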
{
"msg_contents": "Hello folks,\n\nThank you very much for analysis and suggested - there is a lot to learn\nhere. I just tried UNION queries and got following error:\n\nERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT\n\nI made a table dump for anyone who wants to give it a spin\nhttps://app.box.com/s/464b12glmlk5o4gvzz7krc4c8s2fxlwr\nand here is the gist for the original commands\nhttps://gist.github.com/lessless/33215d0c147645db721e74e07498ac53\n\nOn Wed, Jun 28, 2017 at 8:10 PM, Brad DeJong <[email protected]> wrote:\n\n>\n>\n> On 2017-06-28, Pavel Stehule wrote ...\n> > On 2017-06-28, Yevhenii Kurtov wrote ...\n> >> On 2017-06-28, Pavel Stehule wrote ...\n> >>> On 2017-06-28, Yevhenii Kurtov wrote ...\n> >>>> We have a query that is run almost each second and it's very\n> important to squeeze every other ms out of it. The query is:\n> >>>> ...\n> >>>> I added following index: CREATE INDEX ON campaign_jobs(id, status,\n> failed_at, started_at, priority DESC, times_failed);\n> >>>> ...\n> >>> There are few issues\n> >>> a) parametrized LIMIT\n> >>> b) complex predicate with lot of OR\n> >>> c) slow external sort\n> >>>\n> >>> b) signalize maybe some strange in design .. try to replace \"OR\" by\n> \"UNION\" query\n> >>> c) if you can and you have good enough memory .. try to increase\n> work_mem .. maybe 20MB\n> >>>\n> >>> if you change query to union queries, then you can use conditional\n> indexes\n> >>>\n> >>> create index(id) where status = 0;\n> >>> create index(failed_at) where status = 2;\n> >>> create index(started_at) where status = 1;\n> >>\n> >> Can you please give a tip how to rewrite the query with UNION clause?\n> >\n> > SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> > WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> > UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> > WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> > UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> > WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> > ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> > LIMIT $7\n> > FOR UPDATE SKIP LOCKED\n>\n>\n> Normally (at least for developers I've worked with), that kind of query\n> structure is used when the \"status\" values don't overlap and don't change\n> from query to query. Judging from Pavel's suggested conditional indexes\n> (i.e. \"where status = <constant>\"), he also thinks that is likely.\n>\n> Give the optimizer that information so that it can use it. Assuming $1 = 0\n> and $3 = 2 and $5 = 1, substitute literals. Substitute literal for $7 in\n> limit. Push order by and limit to each branch of the union all (or does\n> Postgres figure that out automatically?) Replace union with union all (not\n> sure about Postgres, but allows other dbms to avoid sorting and merging\n> result sets to eliminate duplicates). (Use of UNION ALL assumes that \"id\"\n> is unique across rows as implied by only \"id\" being selected with FOR\n> UPDATE. 
If multiple rows can have the same \"id\", then use UNION to\n> eliminate the duplicates.)\n>\n> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 0 AND NOT \"id\" = ANY($1)\n> UNION ALL\n> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 2 AND \"failed_at\" > $2\n> UNION ALL\n> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 1 AND \"started_at\" < $3\n> ORDER BY \"priority\" DESC, \"times_failed\"\n> LIMIT 100\n> FOR UPDATE SKIP LOCKED\n>\n>\n> Another thing that you could try is to push the ORDER BY and LIMIT to the\n> branches of the UNION (or does Postgres figure that out automatically?) and\n> use slightly different indexes. This may not make sense for all the\n> branches but one nice thing about UNION is that each branch can be tweaked\n> independently. Also, there are probably unmentioned functional dependencies\n> that you can use to reduce the index size and/or improve your match rate.\n> Example - if status = 1 means that the campaign_job has started but not\n> failed or completed, then you may know that started_at is set, but\n> failed_at and ended_at are null. The < comparison in and of itself implies\n> that only rows where \"started_at\" is not null will match the condition.\n>\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE (((c0.\"status\" = 0) AND\n> NOT (c0.\"id\" = ANY($1)))) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT 100\n> UNION ALL\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 2) AND\n> (c0.\"failed_at\" > $2)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT\n> 100\n> UNION ALL\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 1) AND\n> (c0.\"started_at\" < $3)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT 100\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT 100\n> FOR UPDATE SKIP LOCKED\n>\n> Including the \"priority\", \"times_failed\" and \"id\" columns in the indexes\n> along with \"failed_at\"/\"started_at\" allows the optimizer to do index only\n> scans. (May still have to do random I/O to the data page to determine tuple\n> version visibility but I don't think that can be eliminated.)\n>\n> create index ... (\"priority\" desc, \"times_failed\", \"id\")\n> where \"status\" = 0;\n> create index ... (\"priority\" desc, \"times_failed\", \"id\", \"failed_at\")\n> where \"status\" = 2 and \"failed_at\" is not null;\n> create index ... (\"priority\" desc, \"times_failed\", \"id\", \"started_at\")\n> where \"status\" = 1 and \"started_at\" is not null; -- and ended_at is null\n> and ...\n>\n>\n> I'm assuming that the optimizer knows that \"where status = 1 and\n> started_at < $3\" implies \"and started_at is not null\" and will consider the\n> conditional index. If not, then the \"and started_at is not null\" needs to\n> be explicit.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHello folks,Thank you very much for analysis and suggested - there is a lot to learn here. 
I just tried UNION queries and got following error:ERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPTI made a table dump for anyone who wants to give it a spin https://app.box.com/s/464b12glmlk5o4gvzz7krc4c8s2fxlwrand here is the gist for the original commands https://gist.github.com/lessless/33215d0c147645db721e74e07498ac53On Wed, Jun 28, 2017 at 8:10 PM, Brad DeJong <[email protected]> wrote:\n\nOn 2017-06-28, Pavel Stehule wrote ...\n> On 2017-06-28, Yevhenii Kurtov wrote ...\n>> On 2017-06-28, Pavel Stehule wrote ...\n>>> On 2017-06-28, Yevhenii Kurtov wrote ...\n>>>> We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is:\n>>>> ...\n>>>> I added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);\n>>>> ...\n>>> There are few issues\n>>> a) parametrized LIMIT\n>>> b) complex predicate with lot of OR\n>>> c) slow external sort\n>>>\n>>> b) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" query\n>>> c) if you can and you have good enough memory .. try to increase work_mem .. maybe 20MB\n>>>\n>>> if you change query to union queries, then you can use conditional indexes\n>>>\n>>> create index(id) where status = 0;\n>>> create index(failed_at) where status = 2;\n>>> create index(started_at) where status = 1;\n>>\n>> Can you please give a tip how to rewrite the query with UNION clause?\n>\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT $7\n> FOR UPDATE SKIP LOCKED\n\n\nNormally (at least for developers I've worked with), that kind of query structure is used when the \"status\" values don't overlap and don't change from query to query. Judging from Pavel's suggested conditional indexes (i.e. \"where status = <constant>\"), he also thinks that is likely.\n\nGive the optimizer that information so that it can use it. Assuming $1 = 0 and $3 = 2 and $5 = 1, substitute literals. Substitute literal for $7 in limit. Push order by and limit to each branch of the union all (or does Postgres figure that out automatically?) Replace union with union all (not sure about Postgres, but allows other dbms to avoid sorting and merging result sets to eliminate duplicates). (Use of UNION ALL assumes that \"id\" is unique across rows as implied by only \"id\" being selected with FOR UPDATE. If multiple rows can have the same \"id\", then use UNION to eliminate the duplicates.)\n\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 0 AND NOT \"id\" = ANY($1)\n UNION ALL\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 2 AND \"failed_at\" > $2\n UNION ALL\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 1 AND \"started_at\" < $3\nORDER BY \"priority\" DESC, \"times_failed\"\nLIMIT 100\nFOR UPDATE SKIP LOCKED\n\n\nAnother thing that you could try is to push the ORDER BY and LIMIT to the branches of the UNION (or does Postgres figure that out automatically?) and use slightly different indexes. This may not make sense for all the branches but one nice thing about UNION is that each branch can be tweaked independently. 
Also, there are probably unmentioned functional dependencies that you can use to reduce the index size and/or improve your match rate. Example - if status = 1 means that the campaign_job has started but not failed or completed, then you may know that started_at is set, but failed_at and ended_at are null. The < comparison in and of itself implies that only rows where \"started_at\" is not null will match the condition.\n\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE (((c0.\"status\" = 0) AND NOT (c0.\"id\" = ANY($1)))) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nUNION ALL\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 2) AND (c0.\"failed_at\" > $2)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nUNION ALL\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 1) AND (c0.\"started_at\" < $3)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\nLIMIT 100\nFOR UPDATE SKIP LOCKED\n\nIncluding the \"priority\", \"times_failed\" and \"id\" columns in the indexes along with \"failed_at\"/\"started_at\" allows the optimizer to do index only scans. (May still have to do random I/O to the data page to determine tuple version visibility but I don't think that can be eliminated.)\n\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\") where \"status\" = 0;\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"failed_at\") where \"status\" = 2 and \"failed_at\" is not null;\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"started_at\") where \"status\" = 1 and \"started_at\" is not null; -- and ended_at is null and ...\n\n\nI'm assuming that the optimizer knows that \"where status = 1 and started_at < $3\" implies \"and started_at is not null\" and will consider the conditional index. If not, then the \"and started_at is not null\" needs to be explicit.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 29 Jun 2017 12:17:44 +0700",
"msg_from": "Yevhenii Kurtov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
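One possible workaround, offered here only as a sketch since it is not from the thread: keep the UNION for candidate selection but move the row locking to an outer query over the base table, where FOR UPDATE SKIP LOCKED is allowed.

-- Sketch only: the inner UNION ALL picks candidate ids (and can use the
-- partial indexes); the outer query re-reads, orders and locks them.
-- $1..$3 mirror the parameters of the original query.
SELECT c0.id
FROM campaign_jobs AS c0
WHERE c0.id IN (
        SELECT id FROM campaign_jobs WHERE status = 0 AND NOT (id = ANY ($1))
        UNION ALL
        SELECT id FROM campaign_jobs WHERE status = 2 AND failed_at  > $2
        UNION ALL
        SELECT id FROM campaign_jobs WHERE status = 1 AND started_at < $3
      )
ORDER BY c0.priority DESC, c0.times_failed
LIMIT 100
FOR UPDATE SKIP LOCKED;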
{
"msg_contents": "2017-06-29 7:17 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n\n> Hello folks,\n>\n> Thank you very much for analysis and suggested - there is a lot to learn\n> here. I just tried UNION queries and got following error:\n>\n> ERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT\n>\n\nit is sad :(\n\nmaybe bitmap index scan can work\n\npostgres=# create table test(id int, started date, failed date, status int);\nCREATE TABLE\npostgres=# create index on test(id) where status = 0;\nCREATE INDEX\npostgres=# create index on test(started) where status = 1;\nCREATE INDEX\npostgres=# create index on test(failed ) where status = 2;\nCREATE INDEX\npostgres=# explain select id from test where (status = 0 and id in\n(1,2,3,4,5)) or (status = 1 and started < current_date) or (status = 2 and\nfailed > current_date);\n┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n│\n QUERY PLAN\n╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\n│ Bitmap Heap Scan on test (cost=12.93..22.50 rows=6 width=4)\n\n│ Recheck Cond: (((id = ANY ('{1,2,3,4,5}'::integer[])) AND (status = 0))\nOR ((started < CURRENT_DATE) AND (status = 1)) OR ((faile\n│ Filter: (((status = 0) AND (id = ANY ('{1,2,3,4,5}'::integer[]))) OR\n((status = 1) AND (started < CURRENT_DATE)) OR ((status = 2)\n│ -> BitmapOr (cost=12.93..12.93 rows=6 width=0)\n\n│ -> Bitmap Index Scan on test_id_idx (cost=0.00..4.66 rows=1\nwidth=0)\n│ Index Cond: (id = ANY ('{1,2,3,4,5}'::integer[]))\n\n│ -> Bitmap Index Scan on test_started_idx (cost=0.00..4.13\nrows=3 width=0)\n│ Index Cond: (started < CURRENT_DATE)\n\n│ -> Bitmap Index Scan on test_failed_idx (cost=0.00..4.13 rows=3\nwidth=0)\n│ Index Cond: (failed > CURRENT_DATE)\n\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n(10 rows)\n\n\n\n>\n> I made a table dump for anyone who wants to give it a spin\n> https://app.box.com/s/464b12glmlk5o4gvzz7krc4c8s2fxlwr\n> and here is the gist for the original commands https://gist.github.\n> com/lessless/33215d0c147645db721e74e07498ac53\n>\n> On Wed, Jun 28, 2017 at 8:10 PM, Brad DeJong <[email protected]>\n> wrote:\n>\n>>\n>>\n>> On 2017-06-28, Pavel Stehule wrote ...\n>> > On 2017-06-28, Yevhenii Kurtov wrote ...\n>> >> On 2017-06-28, Pavel Stehule wrote ...\n>> >>> On 2017-06-28, Yevhenii Kurtov wrote ...\n>> >>>> We have a query that is run almost each second and it's very\n>> important to squeeze every other ms out of it. The query is:\n>> >>>> ...\n>> >>>> I added following index: CREATE INDEX ON campaign_jobs(id, status,\n>> failed_at, started_at, priority DESC, times_failed);\n>> >>>> ...\n>> >>> There are few issues\n>> >>> a) parametrized LIMIT\n>> >>> b) complex predicate with lot of OR\n>> >>> c) slow external sort\n>> >>>\n>> >>> b) signalize maybe some strange in design .. try to replace \"OR\" by\n>> \"UNION\" query\n>> >>> c) if you can and you have good enough memory .. try to increase\n>> work_mem .. 
maybe 20MB\n>> >>>\n>> >>> if you change query to union queries, then you can use conditional\n>> indexes\n>> >>>\n>> >>> create index(id) where status = 0;\n>> >>> create index(failed_at) where status = 2;\n>> >>> create index(started_at) where status = 1;\n>> >>\n>> >> Can you please give a tip how to rewrite the query with UNION clause?\n>> >\n>> > SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>> > WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n>> > UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>> > WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n>> > UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>> > WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n>> > ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> > LIMIT $7\n>> > FOR UPDATE SKIP LOCKED\n>>\n>>\n>> Normally (at least for developers I've worked with), that kind of query\n>> structure is used when the \"status\" values don't overlap and don't change\n>> from query to query. Judging from Pavel's suggested conditional indexes\n>> (i.e. \"where status = <constant>\"), he also thinks that is likely.\n>>\n>> Give the optimizer that information so that it can use it. Assuming $1 =\n>> 0 and $3 = 2 and $5 = 1, substitute literals. Substitute literal for $7 in\n>> limit. Push order by and limit to each branch of the union all (or does\n>> Postgres figure that out automatically?) Replace union with union all (not\n>> sure about Postgres, but allows other dbms to avoid sorting and merging\n>> result sets to eliminate duplicates). (Use of UNION ALL assumes that \"id\"\n>> is unique across rows as implied by only \"id\" being selected with FOR\n>> UPDATE. If multiple rows can have the same \"id\", then use UNION to\n>> eliminate the duplicates.)\n>>\n>> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 0 AND NOT \"id\" = ANY($1)\n>> UNION ALL\n>> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 2 AND \"failed_at\" > $2\n>> UNION ALL\n>> SELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 1 AND \"started_at\" < $3\n>> ORDER BY \"priority\" DESC, \"times_failed\"\n>> LIMIT 100\n>> FOR UPDATE SKIP LOCKED\n>>\n>>\n>> Another thing that you could try is to push the ORDER BY and LIMIT to the\n>> branches of the UNION (or does Postgres figure that out automatically?) and\n>> use slightly different indexes. This may not make sense for all the\n>> branches but one nice thing about UNION is that each branch can be tweaked\n>> independently. Also, there are probably unmentioned functional dependencies\n>> that you can use to reduce the index size and/or improve your match rate.\n>> Example - if status = 1 means that the campaign_job has started but not\n>> failed or completed, then you may know that started_at is set, but\n>> failed_at and ended_at are null. 
The < comparison in and of itself implies\n>> that only rows where \"started_at\" is not null will match the condition.\n>>\n>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE (((c0.\"status\" = 0) AND\n>> NOT (c0.\"id\" = ANY($1)))) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> LIMIT 100\n>> UNION ALL\n>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 2) AND\n>> (c0.\"failed_at\" > $2)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT\n>> 100\n>> UNION ALL\n>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 1) AND\n>> (c0.\"started_at\" < $3)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> LIMIT 100\n>> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> LIMIT 100\n>> FOR UPDATE SKIP LOCKED\n>>\n>> Including the \"priority\", \"times_failed\" and \"id\" columns in the indexes\n>> along with \"failed_at\"/\"started_at\" allows the optimizer to do index only\n>> scans. (May still have to do random I/O to the data page to determine tuple\n>> version visibility but I don't think that can be eliminated.)\n>>\n>> create index ... (\"priority\" desc, \"times_failed\", \"id\")\n>> where \"status\" = 0;\n>> create index ... (\"priority\" desc, \"times_failed\", \"id\", \"failed_at\")\n>> where \"status\" = 2 and \"failed_at\" is not null;\n>> create index ... (\"priority\" desc, \"times_failed\", \"id\", \"started_at\")\n>> where \"status\" = 1 and \"started_at\" is not null; -- and ended_at is null\n>> and ...\n>>\n>>\n>> I'm assuming that the optimizer knows that \"where status = 1 and\n>> started_at < $3\" implies \"and started_at is not null\" and will consider the\n>> conditional index. If not, then the \"and started_at is not null\" needs to\n>> be explicit.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\n2017-06-29 7:17 GMT+02:00 Yevhenii Kurtov <[email protected]>:Hello folks,Thank you very much for analysis and suggested - there is a lot to learn here. 
I just tried UNION queries and got following error:ERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPTit is sad :(maybe bitmap index scan can workpostgres=# create table test(id int, started date, failed date, status int);CREATE TABLEpostgres=# create index on test(id) where status = 0;CREATE INDEXpostgres=# create index on test(started) where status = 1;CREATE INDEXpostgres=# create index on test(failed ) where status = 2;CREATE INDEXpostgres=# explain select id from test where (status = 0 and id in (1,2,3,4,5)) or (status = 1 and started < current_date) or (status = 2 and failed > current_date);┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────│ QUERY PLAN ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════│ Bitmap Heap Scan on test (cost=12.93..22.50 rows=6 width=4) │ Recheck Cond: (((id = ANY ('{1,2,3,4,5}'::integer[])) AND (status = 0)) OR ((started < CURRENT_DATE) AND (status = 1)) OR ((faile│ Filter: (((status = 0) AND (id = ANY ('{1,2,3,4,5}'::integer[]))) OR ((status = 1) AND (started < CURRENT_DATE)) OR ((status = 2)│ -> BitmapOr (cost=12.93..12.93 rows=6 width=0) │ -> Bitmap Index Scan on test_id_idx (cost=0.00..4.66 rows=1 width=0) │ Index Cond: (id = ANY ('{1,2,3,4,5}'::integer[])) │ -> Bitmap Index Scan on test_started_idx (cost=0.00..4.13 rows=3 width=0) │ Index Cond: (started < CURRENT_DATE) │ -> Bitmap Index Scan on test_failed_idx (cost=0.00..4.13 rows=3 width=0) │ Index Cond: (failed > CURRENT_DATE) └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────(10 rows) I made a table dump for anyone who wants to give it a spin https://app.box.com/s/464b12glmlk5o4gvzz7krc4c8s2fxlwrand here is the gist for the original commands https://gist.github.com/lessless/33215d0c147645db721e74e07498ac53On Wed, Jun 28, 2017 at 8:10 PM, Brad DeJong <[email protected]> wrote:\n\nOn 2017-06-28, Pavel Stehule wrote ...\n> On 2017-06-28, Yevhenii Kurtov wrote ...\n>> On 2017-06-28, Pavel Stehule wrote ...\n>>> On 2017-06-28, Yevhenii Kurtov wrote ...\n>>>> We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is:\n>>>> ...\n>>>> I added following index: CREATE INDEX ON campaign_jobs(id, status, failed_at, started_at, priority DESC, times_failed);\n>>>> ...\n>>> There are few issues\n>>> a) parametrized LIMIT\n>>> b) complex predicate with lot of OR\n>>> c) slow external sort\n>>>\n>>> b) signalize maybe some strange in design .. try to replace \"OR\" by \"UNION\" query\n>>> c) if you can and you have good enough memory .. try to increase work_mem .. 
maybe 20MB\n>>>\n>>> if you change query to union queries, then you can use conditional indexes\n>>>\n>>> create index(id) where status = 0;\n>>> create index(failed_at) where status = 2;\n>>> create index(started_at) where status = 1;\n>>\n>> Can you please give a tip how to rewrite the query with UNION clause?\n>\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> UNION SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT $7\n> FOR UPDATE SKIP LOCKED\n\n\nNormally (at least for developers I've worked with), that kind of query structure is used when the \"status\" values don't overlap and don't change from query to query. Judging from Pavel's suggested conditional indexes (i.e. \"where status = <constant>\"), he also thinks that is likely.\n\nGive the optimizer that information so that it can use it. Assuming $1 = 0 and $3 = 2 and $5 = 1, substitute literals. Substitute literal for $7 in limit. Push order by and limit to each branch of the union all (or does Postgres figure that out automatically?) Replace union with union all (not sure about Postgres, but allows other dbms to avoid sorting and merging result sets to eliminate duplicates). (Use of UNION ALL assumes that \"id\" is unique across rows as implied by only \"id\" being selected with FOR UPDATE. If multiple rows can have the same \"id\", then use UNION to eliminate the duplicates.)\n\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 0 AND NOT \"id\" = ANY($1)\n UNION ALL\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 2 AND \"failed_at\" > $2\n UNION ALL\nSELECT \"id\" FROM \"campaign_jobs\" WHERE \"status\" = 1 AND \"started_at\" < $3\nORDER BY \"priority\" DESC, \"times_failed\"\nLIMIT 100\nFOR UPDATE SKIP LOCKED\n\n\nAnother thing that you could try is to push the ORDER BY and LIMIT to the branches of the UNION (or does Postgres figure that out automatically?) and use slightly different indexes. This may not make sense for all the branches but one nice thing about UNION is that each branch can be tweaked independently. Also, there are probably unmentioned functional dependencies that you can use to reduce the index size and/or improve your match rate. Example - if status = 1 means that the campaign_job has started but not failed or completed, then you may know that started_at is set, but failed_at and ended_at are null. The < comparison in and of itself implies that only rows where \"started_at\" is not null will match the condition.\n\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE (((c0.\"status\" = 0) AND NOT (c0.\"id\" = ANY($1)))) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nUNION ALL\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 2) AND (c0.\"failed_at\" > $2)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nUNION ALL\nSELECT c0.\"id\" FROM \"campaign_jobs\" AS c0 WHERE ((c0.\"status\" = 1) AND (c0.\"started_at\" < $3)) ORDER BY c0.\"priority\" DESC, c0.\"times_failed\" LIMIT 100\nORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\nLIMIT 100\nFOR UPDATE SKIP LOCKED\n\nIncluding the \"priority\", \"times_failed\" and \"id\" columns in the indexes along with \"failed_at\"/\"started_at\" allows the optimizer to do index only scans. 
(May still have to do random I/O to the data page to determine tuple version visibility but I don't think that can be eliminated.)\n\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\") where \"status\" = 0;\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"failed_at\") where \"status\" = 2 and \"failed_at\" is not null;\ncreate index ... (\"priority\" desc, \"times_failed\", \"id\", \"started_at\") where \"status\" = 1 and \"started_at\" is not null; -- and ended_at is null and ...\n\n\nI'm assuming that the optimizer knows that \"where status = 1 and started_at < $3\" implies \"and started_at is not null\" and will consider the conditional index. If not, then the \"and started_at is not null\" needs to be explicit.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 29 Jun 2017 17:50:27 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, Jun 27, 2017 at 11:47 PM, Yevhenii Kurtov <[email protected]\n> wrote:\n\n> Hello,\n>\n> We have a query that is run almost each second and it's very important to\n> squeeze every other ms out of it. The query is:\n>\n> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n> LIMIT $7\n> FOR UPDATE SKIP LOCKED\n>\n>\n\n\n>\n> I see that query still went through the Seq Scan instead of Index Scan. Is\n> it due to poorly crafted index or because of query structure? Is it\n> possible to make this query faster?\n>\n\nAn index on (priority desc, times_failed) should speed this up massively.\nMight want to include status at the end as well. However, your example data\nis not terribly realistic.\n\nWhat version of PostgreSQL are you using?\n\nCheers,\n\nJeff\n\nOn Tue, Jun 27, 2017 at 11:47 PM, Yevhenii Kurtov <[email protected]> wrote:Hello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKED I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?An index on (priority desc, times_failed) should speed this up massively. Might want to include status at the end as well. However, your example data is not terribly realistic.What version of PostgreSQL are you using?Cheers,Jeff",
"msg_date": "Thu, 29 Jun 2017 11:11:03 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
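A sketch of the index Jeff suggests (the index name here is made up). Because the ORDER BY matches the index order, the planner can return rows already sorted and stop after LIMIT rows instead of sorting the whole table; EXPLAIN should then show an index scan feeding the Limit node rather than a Sort:

CREATE INDEX campaign_jobs_prio_failed_idx
    ON campaign_jobs (priority DESC, times_failed, status);

EXPLAIN (ANALYZE, BUFFERS)
SELECT id
  FROM campaign_jobs
 WHERE status = 0
 ORDER BY priority DESC, times_failed
 LIMIT 100;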
{
"msg_contents": "Hi Jeff,\n\nThat is just a sample data, we are going live in Jun and I don't have\nanything real so far. Right now it's 9.6 and it will be a latest stable\navailable release on the date that we go live.\n\nOn Fri, Jun 30, 2017 at 1:11 AM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Jun 27, 2017 at 11:47 PM, Yevhenii Kurtov <\n> [email protected]> wrote:\n>\n>> Hello,\n>>\n>> We have a query that is run almost each second and it's very important to\n>> squeeze every other ms out of it. The query is:\n>>\n>> SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0\n>> WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))\n>> OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))\n>> OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))\n>> ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"\n>> LIMIT $7\n>> FOR UPDATE SKIP LOCKED\n>>\n>>\n>\n>\n>>\n>> I see that query still went through the Seq Scan instead of Index Scan.\n>> Is it due to poorly crafted index or because of query structure? Is it\n>> possible to make this query faster?\n>>\n>\n> An index on (priority desc, times_failed) should speed this up massively.\n> Might want to include status at the end as well. However, your example data\n> is not terribly realistic.\n>\n> What version of PostgreSQL are you using?\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff,That is just a sample data, we are going live in Jun and I don't have anything real so far. Right now it's 9.6 and it will be a latest stable available release on the date that we go live. On Fri, Jun 30, 2017 at 1:11 AM, Jeff Janes <[email protected]> wrote:On Tue, Jun 27, 2017 at 11:47 PM, Yevhenii Kurtov <[email protected]> wrote:Hello,We have a query that is run almost each second and it's very important to squeeze every other ms out of it. The query is: SELECT c0.\"id\" FROM \"campaign_jobs\" AS c0WHERE (((c0.\"status\" = $1) AND NOT (c0.\"id\" = ANY($2))))OR ((c0.\"status\" = $3) AND (c0.\"failed_at\" > $4))OR ((c0.\"status\" = $5) AND (c0.\"started_at\" < $6))ORDER BY c0.\"priority\" DESC, c0.\"times_failed\"LIMIT $7FOR UPDATE SKIP LOCKED I see that query still went through the Seq Scan instead of Index Scan. Is it due to poorly crafted index or because of query structure? Is it possible to make this query faster?An index on (priority desc, times_failed) should speed this up massively. Might want to include status at the end as well. However, your example data is not terribly realistic.What version of PostgreSQL are you using?Cheers,Jeff",
"msg_date": "Fri, 30 Jun 2017 02:11:43 +0700",
"msg_from": "Yevhenii Kurtov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "Pavel Stehule wrote:\n> 2017-06-29 7:17 GMT+02:00 Yevhenii Kurtov <[email protected]>:\n> \n\n> > I just tried UNION queries and got following error:\n> >\n> > ERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT\n> \n> it is sad :(\n\nI think we could lift this restriction for UNION ALL, but UNION sounds\ndifficult.\n\n\nBTW I wonder how much of the original problem is caused by using a\nprepared query. I understand the desire to avoid repeated planning\nwork, but I think in this case it may be working against you.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 29 Jun 2017 17:28:23 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
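One way to keep the UNION rewrite while staying clear of the "FOR UPDATE is not allowed with UNION" restriction is to run the union in a subquery and take the row locks on an outer scan of the table; a rough sketch, not tested against the real schema. On the prepared-statement point, running the same statement once with literal values and comparing the EXPLAIN output is a cheap way to check whether a generic plan is hurting.

SELECT id
  FROM campaign_jobs
 WHERE id IN (SELECT id FROM campaign_jobs WHERE status = 0 AND NOT (id = ANY($1))
              UNION ALL
              SELECT id FROM campaign_jobs WHERE status = 2 AND failed_at > $2
              UNION ALL
              SELECT id FROM campaign_jobs WHERE status = 1 AND started_at < $3)
 ORDER BY priority DESC, times_failed
 LIMIT 100
   FOR UPDATE SKIP LOCKED;  -- locks are taken on the outer scan, not on the union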
{
"msg_contents": "On Thu, Jun 29, 2017 at 1:11 PM, Yevhenii Kurtov\n<[email protected]> wrote:\n> Hi Jeff,\n>\n> That is just a sample data, we are going live in Jun and I don't have\n> anything real so far. Right now it's 9.6 and it will be a latest stable\n> available release on the date that we go live.\n\nTrust me on this one, you want to get some realistic fake data in\nthere, and in realistic quantities before you go live to test.\nPostgresql's planner makes decisions based on size of the data it has\nto trundle through and statistical analysis of the data in the tables\netc. You don't wanna go from 500 rows in a test table with the same\nvalues to 10,000,000 rows with wildly varying data in production\nwithout having some clue where that db is gonna be headed performance\nwise.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 29 Jun 2017 21:09:28 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
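In the spirit of Scott's advice, a quick way to load a plausible volume of fake rows before testing; the column list and value distributions below are assumptions and should be adjusted to the real schema and expected workload:

INSERT INTO campaign_jobs (status, priority, times_failed, started_at, failed_at)
SELECT (random() * 2)::int,                                  -- status 0..2
       (random() * 100)::int,                                -- priority
       (random() * 5)::int,                                  -- times_failed
       now() - random() * interval '30 days',                -- started_at
       CASE WHEN random() < 0.1
            THEN now() - random() * interval '10 days' END   -- failed_at for ~10% of rows
  FROM generate_series(1, 5000000);

ANALYZE campaign_jobs;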
{
"msg_contents": "On Thu, Jun 29, 2017 at 12:11 PM, Yevhenii Kurtov <[email protected]\n> wrote:\n\n> Hi Jeff,\n>\n> That is just a sample data, we are going live in Jun and I don't have\n> anything real so far. Right now it's 9.6 and it will be a latest stable\n> available release on the date that we go live.\n>\n\n\nYou need to use your knowledge of the application to come up with some\nplausible sample data.\n\nWhat happens when something succeeds? Does it get deleted from the table,\nor does it get retained but with a certain value of the status column? If\nit is retained, what happens to the priority and times_failed fields?\n\nThe performance of your queuing table will critically depend on that.\n\nIf you need to keep it once it succeeds, you should probably do that by\ndeleting it from the queuing table and inserting it into a history table.\nIt is much easier to keep performance up with that kind of design.\n\nCheers,\n\nJeff\n\nOn Thu, Jun 29, 2017 at 12:11 PM, Yevhenii Kurtov <[email protected]> wrote:Hi Jeff,That is just a sample data, we are going live in Jun and I don't have anything real so far. Right now it's 9.6 and it will be a latest stable available release on the date that we go live. You need to use your knowledge of the application to come up with some plausible sample data.What happens when something succeeds? Does it get deleted from the table, or does it get retained but with a certain value of the status column? If it is retained, what happens to the priority and times_failed fields?The performance of your queuing table will critically depend on that. If you need to keep it once it succeeds, you should probably do that by deleting it from the queuing table and inserting it into a history table. It is much easier to keep performance up with that kind of design.Cheers,Jeff",
"msg_date": "Fri, 30 Jun 2017 10:46:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
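A sketch of the queue/history split Jeff describes, assuming a history table with the same column layout already exists; finished jobs leave the hot queue table in the same statement that archives them, which keeps the queue small and its indexes shallow:

WITH done AS (
    DELETE FROM campaign_jobs
     WHERE id = ANY($1)          -- ids of the jobs that just completed
 RETURNING *
)
INSERT INTO campaign_jobs_history
SELECT * FROM done;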
] |
[
{
"msg_contents": "Hi List,\n\nI have a Server where a simple SQL is taking a long time to return the\nresults the Server settings are as follows:\n\nDebian GNU/Linux 7 (wheezy)\nCPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\nMem: 16GB\nHD: SSG 120 GB\nPostgresql 9.2\n\npostgresql.conf\nshared_buffers = 1536MB\nwork_mem = 32MB\nmaintenance_work_mem = 960MB\neffective_cache_size = 4864MB\n\nI did a test with the following SQL:\n\nselect * from MINHATABELA\n\n\nIt took 7 minutes to return the result.\n\n\nI did the same test on a Server:\n\nWindows Server 2012 Standard\nCPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\nMem: 24GB\nHD: HD 500 GB\nPostgresql 9.2\n\n\npostgresql.conf Default settings that come with the installation\n\nThe same SQL returned in 3 minutes.\n\nThe test in both Servers were done bench.\n\nThis table has 1888240 records whose size is 458 MB\n\nI believe that in both Servers the response time of this SQL is very high,\nbut the main thing in LINUX Server has something very wrong, I think it is\nsomething in the settings.\n\nWhat can I be checking?\n\n\n-- \nAtenciosamente\nDaviramos Roussenq Fortunato\n\nHi List,I have a Server where a simple SQL is taking a long time to return the results the Server settings are as follows:Debian GNU/Linux 7 (wheezy)CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHzMem: 16GBHD: SSG 120 GBPostgresql 9.2postgresql.confshared_buffers = 1536MBwork_mem = 32MB maintenance_work_mem = 960MBeffective_cache_size = 4864MBI did a test with the following SQL:select * from MINHATABELAIt took 7 minutes to return the result.I did the same test on a Server:Windows Server 2012 StandardCPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHzMem: 24GBHD: HD 500 GBPostgresql 9.2postgresql.conf Default settings that come with the installationThe same SQL returned in 3 minutes.The test in both Servers were done bench.This table has 1888240 records whose size is 458 MBI believe that in both Servers the response time of this SQL is very high, but the main thing in LINUX Server has something very wrong, I think it is something in the settings.What can I be checking?-- AtenciosamenteDaviramos Roussenq Fortunato",
"msg_date": "Fri, 30 Jun 2017 16:14:33 -0300",
"msg_from": "Daviramos Roussenq Fortunato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple SQL too slow"
},
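Before tuning anything, it helps to separate the time PostgreSQL spends scanning the table from the time spent shipping and rendering 1.9 million rows in the client. A minimal check from psql with \timing switched on (table name as in the post; unquoted identifiers are stored lowercase):

\timing on
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM minhatabela;   -- runs the scan but returns no rows
SELECT count(*) FROM minhatabela;                       -- scan only, one-row result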
{
"msg_contents": "On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>Hi List,\n>\n>I have a Server where a simple SQL is taking a long time to return the\n>results the Server settings are as follows:\n>\n>Debian GNU/Linux 7 (wheezy)\n>CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n>Mem: 16GB\n>HD: SSG 120 GB\n>Postgresql 9.2\n>\n>postgresql.conf\n>shared_buffers = 1536MB\n>work_mem = 32MB\n>maintenance_work_mem = 960MB\n>effective_cache_size = 4864MB\n>\n>I did a test with the following SQL:\n>\n>select * from MINHATABELA\n>\n>\n>It took 7 minutes to return the result.\n>\n>\n>I did the same test on a Server:\n>\n>Windows Server 2012 Standard\n>CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n>Mem: 24GB\n>HD: HD 500 GB\n>Postgresql 9.2\n>\n>\n>postgresql.conf Default settings that come with the installation\n>\n>The same SQL returned in 3 minutes.\n>\n>The test in both Servers were done bench.\n>\n>This table has 1888240 records whose size is 458 MB\n>\n>I believe that in both Servers the response time of this SQL is very\n>high,\n>but the main thing in LINUX Server has something very wrong, I think it\n>is\n>something in the settings.\n>\n>What can I be checking?\n\nThe query needs a full table scan, so it mainly depends on the speed of your disk. Maybe you have s bloated table. Please check reltuples and relpages from pg_class on both servers and compare.\n\n\n-- \n2ndQuadrant - The PostgreSQL Support Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jun 2017 20:50:45 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
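The reltuples/relpages check suggested above can be run as a single query on both servers; pg_relation_size gives the on-disk size to set against the row count:

SELECT relname,
       reltuples::bigint                     AS estimated_rows,
       relpages,
       pg_size_pretty(pg_relation_size(oid)) AS on_disk
  FROM pg_class
 WHERE relname = 'minhatabela';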
{
"msg_contents": "Debian:\n\nSELECT reltuples::numeric FROM pg_class WHERE oid = 'mytable'::regclass;\nretuples=1883770\n --31ms\n\nSELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname =\n'mytable';\npg_relation_filepath=base/1003173/1204921\nrelpages=30452\n--31ms\n\n\nWindows\n\nSELECT reltuples::numeric FROM pg_class WHERE oid = 'mytable'::regclass;\nretuples=1883970\n--15ms\n\nSELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname =\n'mytable';\npg_relation_filepath=base/24576/205166\nrelpages=30449\n--16ms\n\n2017-06-30 16:50 GMT-03:00 Andreas Kretschmer <[email protected]>:\n\n> On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <\n> [email protected]> wrote:\n> >Hi List,\n> >\n> >I have a Server where a simple SQL is taking a long time to return the\n> >results the Server settings are as follows:\n> >\n> >Debian GNU/Linux 7 (wheezy)\n> >CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n> >Mem: 16GB\n> >HD: SSG 120 GB\n> >Postgresql 9.2\n> >\n> >postgresql.conf\n> >shared_buffers = 1536MB\n> >work_mem = 32MB\n> >maintenance_work_mem = 960MB\n> >effective_cache_size = 4864MB\n> >\n> >I did a test with the following SQL:\n> >\n> >select * from MINHATABELA\n> >\n> >\n> >It took 7 minutes to return the result.\n> >\n> >\n> >I did the same test on a Server:\n> >\n> >Windows Server 2012 Standard\n> >CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n> >Mem: 24GB\n> >HD: HD 500 GB\n> >Postgresql 9.2\n> >\n> >\n> >postgresql.conf Default settings that come with the installation\n> >\n> >The same SQL returned in 3 minutes.\n> >\n> >The test in both Servers were done bench.\n> >\n> >This table has 1888240 records whose size is 458 MB\n> >\n> >I believe that in both Servers the response time of this SQL is very\n> >high,\n> >but the main thing in LINUX Server has something very wrong, I think it\n> >is\n> >something in the settings.\n> >\n> >What can I be checking?\n>\n> The query needs a full table scan, so it mainly depends on the speed of\n> your disk. Maybe you have s bloated table. 
Please check reltuples and\n> relpages from pg_class on both servers and compare.\n>\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company\n>\n\n\n\n-- \nAtenciosamente\nDaviramos Roussenq Fortunato\n\nDebian:SELECT reltuples::numeric FROM pg_class WHERE oid = 'mytable'::regclass; retuples=1883770 --31msSELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'mytable';pg_relation_filepath=base/1003173/1204921relpages=30452--31msWindowsSELECT reltuples::numeric FROM pg_class WHERE oid = 'mytable'::regclass; retuples=1883970--15msSELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'mytable';pg_relation_filepath=base/24576/205166relpages=30449--16ms2017-06-30 16:50 GMT-03:00 Andreas Kretschmer <[email protected]>:On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>Hi List,\n>\n>I have a Server where a simple SQL is taking a long time to return the\n>results the Server settings are as follows:\n>\n>Debian GNU/Linux 7 (wheezy)\n>CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n>Mem: 16GB\n>HD: SSG 120 GB\n>Postgresql 9.2\n>\n>postgresql.conf\n>shared_buffers = 1536MB\n>work_mem = 32MB\n>maintenance_work_mem = 960MB\n>effective_cache_size = 4864MB\n>\n>I did a test with the following SQL:\n>\n>select * from MINHATABELA\n>\n>\n>It took 7 minutes to return the result.\n>\n>\n>I did the same test on a Server:\n>\n>Windows Server 2012 Standard\n>CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n>Mem: 24GB\n>HD: HD 500 GB\n>Postgresql 9.2\n>\n>\n>postgresql.conf Default settings that come with the installation\n>\n>The same SQL returned in 3 minutes.\n>\n>The test in both Servers were done bench.\n>\n>This table has 1888240 records whose size is 458 MB\n>\n>I believe that in both Servers the response time of this SQL is very\n>high,\n>but the main thing in LINUX Server has something very wrong, I think it\n>is\n>something in the settings.\n>\n>What can I be checking?\n\nThe query needs a full table scan, so it mainly depends on the speed of your disk. Maybe you have s bloated table. Please check reltuples and relpages from pg_class on both servers and compare.\n\n\n--\n2ndQuadrant - The PostgreSQL Support Company\n-- AtenciosamenteDaviramos Roussenq Fortunato",
"msg_date": "Sat, 1 Jul 2017 13:56:13 -0300",
"msg_from": "Daviramos Roussenq Fortunato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple SQL too slow"
},
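Both servers report roughly the same 30,450 pages; at the default 8 kB page size that is only about 238 MB, so table bloat does not explain a multi-minute runtime:

SELECT pg_size_pretty(30452::bigint * 8192);   -- ~238 MB on either server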
{
"msg_contents": "On 1 July 2017 17:56:13 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>Debian:\n>\n>SELECT reltuples::numeric FROM pg_class WHERE oid =\n>'mytable'::regclass;\n>retuples=1883770\n> --31ms\n>\n>SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n>=\n>'mytable';\n>pg_relation_filepath=base/1003173/1204921\n>relpages=30452\n>--31ms\n>\n>\n>Windows\n>\n>SELECT reltuples::numeric FROM pg_class WHERE oid =\n>'mytable'::regclass;\n>retuples=1883970\n>--15ms\n>\n>SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n>=\n>'mytable';\n>pg_relation_filepath=base/24576/205166\n>relpages=30449\n>--16ms\n>\n>2017-06-30 16:50 GMT-03:00 Andreas Kretschmer\n><[email protected]>:\n>\n>> On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <\n>> [email protected]> wrote:\n>> >Hi List,\n>> >\n>> >I have a Server where a simple SQL is taking a long time to return\n>the\n>> >results the Server settings are as follows:\n>> >\n>> >Debian GNU/Linux 7 (wheezy)\n>> >CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n>> >Mem: 16GB\n>> >HD: SSG 120 GB\n>> >Postgresql 9.2\n>> >\n>> >postgresql.conf\n>> >shared_buffers = 1536MB\n>> >work_mem = 32MB\n>> >maintenance_work_mem = 960MB\n>> >effective_cache_size = 4864MB\n>> >\n>> >I did a test with the following SQL:\n>> >\n>> >select * from MINHATABELA\n>> >\n>> >\n>> >It took 7 minutes to return the result.\n>> >\n>> >\n>> >I did the same test on a Server:\n>> >\n>> >Windows Server 2012 Standard\n>> >CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n>> >Mem: 24GB\n>> >HD: HD 500 GB\n>> >Postgresql 9.2\n>> >\n>> >\n>> >postgresql.conf Default settings that come with the installation\n>> >\n>> >The same SQL returned in 3 minutes.\n>> >\n>> >The test in both Servers were done bench.\n>> >\n>> >This table has 1888240 records whose size is 458 MB\n>> >\n>> >I believe that in both Servers the response time of this SQL is very\n>> >high,\n>> >but the main thing in LINUX Server has something very wrong, I think\n>it\n>> >is\n>> >something in the settings.\n>> >\n>> >What can I be checking?\n>>\n>> The query needs a full table scan, so it mainly depends on the speed\n>of\n>> your disk. Maybe you have s bloated table. Please check reltuples and\n>> relpages from pg_class on both servers and compare.\n>>\n>>\n>> --\n>> 2ndQuadrant - The PostgreSQL Support Company\n>>\n\nHrm. Settings seems okay (you can increase shared buffers up to 4-6 GB, and also effective_cache_size to 75% of ram, but i think that's not the reason for the bad performance.\nWindows contains 50% more ram, maybe better/more caching. But i'm not sure if this can be the reason. The pg_class - queries are also slower, so i think there is something wrong on os-level. Hard to guess what.\n\nRegards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 01 Jul 2017 20:44:06 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
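For reference, Andreas' sizing suggestion for the 16 GB Linux box would look roughly like this in postgresql.conf; these are his rough guidelines rather than measured values, and as he says they are unlikely to explain a 7-minute sequential scan on their own:

shared_buffers = 4GB          # "up to 4-6 GB"
effective_cache_size = 12GB   # about 75% of RAM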
{
"msg_contents": "What tests could I do. Rigid Linux disk is much faster than Windows, I\nshould get a much better perfomace on this Linux.\n\nWhat test battery do you recommend I do?\n\n\n2017-07-01 16:44 GMT-03:00 Andreas Kretschmer <[email protected]>:\n\n> On 1 July 2017 17:56:13 GMT+01:00, Daviramos Roussenq Fortunato <\n> [email protected]> wrote:\n> >Debian:\n> >\n> >SELECT reltuples::numeric FROM pg_class WHERE oid =\n> >'mytable'::regclass;\n> >retuples=1883770\n> > --31ms\n> >\n> >SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n> >=\n> >'mytable';\n> >pg_relation_filepath=base/1003173/1204921\n> >relpages=30452\n> >--31ms\n> >\n> >\n> >Windows\n> >\n> >SELECT reltuples::numeric FROM pg_class WHERE oid =\n> >'mytable'::regclass;\n> >retuples=1883970\n> >--15ms\n> >\n> >SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n> >=\n> >'mytable';\n> >pg_relation_filepath=base/24576/205166\n> >relpages=30449\n> >--16ms\n> >\n> >2017-06-30 16:50 GMT-03:00 Andreas Kretschmer\n> ><[email protected]>:\n> >\n> >> On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <\n> >> [email protected]> wrote:\n> >> >Hi List,\n> >> >\n> >> >I have a Server where a simple SQL is taking a long time to return\n> >the\n> >> >results the Server settings are as follows:\n> >> >\n> >> >Debian GNU/Linux 7 (wheezy)\n> >> >CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n> >> >Mem: 16GB\n> >> >HD: SSG 120 GB\n> >> >Postgresql 9.2\n> >> >\n> >> >postgresql.conf\n> >> >shared_buffers = 1536MB\n> >> >work_mem = 32MB\n> >> >maintenance_work_mem = 960MB\n> >> >effective_cache_size = 4864MB\n> >> >\n> >> >I did a test with the following SQL:\n> >> >\n> >> >select * from MINHATABELA\n> >> >\n> >> >\n> >> >It took 7 minutes to return the result.\n> >> >\n> >> >\n> >> >I did the same test on a Server:\n> >> >\n> >> >Windows Server 2012 Standard\n> >> >CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n> >> >Mem: 24GB\n> >> >HD: HD 500 GB\n> >> >Postgresql 9.2\n> >> >\n> >> >\n> >> >postgresql.conf Default settings that come with the installation\n> >> >\n> >> >The same SQL returned in 3 minutes.\n> >> >\n> >> >The test in both Servers were done bench.\n> >> >\n> >> >This table has 1888240 records whose size is 458 MB\n> >> >\n> >> >I believe that in both Servers the response time of this SQL is very\n> >> >high,\n> >> >but the main thing in LINUX Server has something very wrong, I think\n> >it\n> >> >is\n> >> >something in the settings.\n> >> >\n> >> >What can I be checking?\n> >>\n> >> The query needs a full table scan, so it mainly depends on the speed\n> >of\n> >> your disk. Maybe you have s bloated table. Please check reltuples and\n> >> relpages from pg_class on both servers and compare.\n> >>\n> >>\n> >> --\n> >> 2ndQuadrant - The PostgreSQL Support Company\n> >>\n>\n> Hrm. Settings seems okay (you can increase shared buffers up to 4-6 GB,\n> and also effective_cache_size to 75% of ram, but i think that's not the\n> reason for the bad performance.\n> Windows contains 50% more ram, maybe better/more caching. But i'm not sure\n> if this can be the reason. The pg_class - queries are also slower, so i\n> think there is something wrong on os-level. Hard to guess what.\n>\n> Regards, Andreas\n> --\n> 2ndQuadrant - The PostgreSQL Support Company\n>\n\n\n\n-- \nAtenciosamente\nDaviramos Roussenq Fortunato\n\nWhat tests could I do. 
Rigid Linux disk is much faster than Windows, I should get a much better perfomace on this Linux.\n\nWhat test battery do you recommend I do?2017-07-01 16:44 GMT-03:00 Andreas Kretschmer <[email protected]>:On 1 July 2017 17:56:13 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>Debian:\n>\n>SELECT reltuples::numeric FROM pg_class WHERE oid =\n>'mytable'::regclass;\n>retuples=1883770\n> --31ms\n>\n>SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n>=\n>'mytable';\n>pg_relation_filepath=base/1003173/1204921\n>relpages=30452\n>--31ms\n>\n>\n>Windows\n>\n>SELECT reltuples::numeric FROM pg_class WHERE oid =\n>'mytable'::regclass;\n>retuples=1883970\n>--15ms\n>\n>SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname\n>=\n>'mytable';\n>pg_relation_filepath=base/24576/205166\n>relpages=30449\n>--16ms\n>\n>2017-06-30 16:50 GMT-03:00 Andreas Kretschmer\n><[email protected]>:\n>\n>> On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato <\n>> [email protected]> wrote:\n>> >Hi List,\n>> >\n>> >I have a Server where a simple SQL is taking a long time to return\n>the\n>> >results the Server settings are as follows:\n>> >\n>> >Debian GNU/Linux 7 (wheezy)\n>> >CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz\n>> >Mem: 16GB\n>> >HD: SSG 120 GB\n>> >Postgresql 9.2\n>> >\n>> >postgresql.conf\n>> >shared_buffers = 1536MB\n>> >work_mem = 32MB\n>> >maintenance_work_mem = 960MB\n>> >effective_cache_size = 4864MB\n>> >\n>> >I did a test with the following SQL:\n>> >\n>> >select * from MINHATABELA\n>> >\n>> >\n>> >It took 7 minutes to return the result.\n>> >\n>> >\n>> >I did the same test on a Server:\n>> >\n>> >Windows Server 2012 Standard\n>> >CPU: Intel(R) Xeon(R) CPU E5-2450 @ 2.10GHz\n>> >Mem: 24GB\n>> >HD: HD 500 GB\n>> >Postgresql 9.2\n>> >\n>> >\n>> >postgresql.conf Default settings that come with the installation\n>> >\n>> >The same SQL returned in 3 minutes.\n>> >\n>> >The test in both Servers were done bench.\n>> >\n>> >This table has 1888240 records whose size is 458 MB\n>> >\n>> >I believe that in both Servers the response time of this SQL is very\n>> >high,\n>> >but the main thing in LINUX Server has something very wrong, I think\n>it\n>> >is\n>> >something in the settings.\n>> >\n>> >What can I be checking?\n>>\n>> The query needs a full table scan, so it mainly depends on the speed\n>of\n>> your disk. Maybe you have s bloated table. Please check reltuples and\n>> relpages from pg_class on both servers and compare.\n>>\n>>\n>> --\n>> 2ndQuadrant - The PostgreSQL Support Company\n>>\n\nHrm. Settings seems okay (you can increase shared buffers up to 4-6 GB, and also effective_cache_size to 75% of ram, but i think that's not the reason for the bad performance.\nWindows contains 50% more ram, maybe better/more caching. But i'm not sure if this can be the reason. The pg_class - queries are also slower, so i think there is something wrong on os-level. Hard to guess what.\n\nRegards, Andreas\n--\n2ndQuadrant - The PostgreSQL Support Company\n-- AtenciosamenteDaviramos Roussenq Fortunato",
"msg_date": "Sat, 1 Jul 2017 17:39:55 -0300",
"msg_from": "Daviramos Roussenq Fortunato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple SQL too slow"
},
{
"msg_contents": "Hello,\n\nOn 07/01/2017 10:39 PM, Daviramos Roussenq Fortunato wrote:\n> What tests could I do. Rigid Linux disk is much faster than Windows, I \n> should get a much better perfomace on this Linux. What test battery do \n> you recommend I do?\n> \n\nI'm not sure what you mean by \"rigid disk\" or \"test battery\", but I \nagree with Andreas that clearly there's something wrong at the system \nlevel. It's hard to guess what exactly, but sequential scan on 250MB \ntable (computed the relpages values) should only take a few seconds on \nany decent hardware, and not 3 or 7 minutes.\n\nThe first thing I would do is running basic system-level tests, for \nexample benchmarking storage using fio.\n\nAfter that, you need to determine what is the bottleneck. Perhaps the \nresources are saturated by something else running on the system - other \nqueries, maybe something else running next to PostgreSQL. Look at top \nand iotop while running the queries, and other system tools.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 1 Jul 2017 22:58:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
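A minimal fio run in the spirit of the suggestion above, approximating PostgreSQL's 8 kB sequential reads; the file path and size are assumptions and should point at the same disk that holds the data directory:

fio --name=seqread --rw=read --bs=8k --size=2G --direct=1 \
    --filename=/var/lib/postgresql/fio.test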
{
"msg_contents": "On 01/07/2017 22:58, Tomas Vondra wrote:\n> After that, you need to determine what is the bottleneck. Perhaps the\n> resources are saturated by something else running on the system - other\n> queries, maybe something else running next to PostgreSQL. Look at top\n> and iotop while running the queries, and other system tools.\n> \n\nAnother explanation would be network issue. Are they stored in\ndifferent locations? And dhoes\n\nEXPLAIN ANALYZE select * from MINHATABELA\n\nhas similar timings on both environment?\n\nAlso, I didn't see any indication about how exactly were the tests\nperformed. Was it using psql, pgAdmin or something else ?\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 1 Jul 2017 23:17:02 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
{
"msg_contents": "I am using pgAdmin for SQL test.\n\nLinux:\n\nEXPLAIN ANALYZE select * from\n\"Seq Scan on lancamentosteste (cost=0.00..49289.74 rows=1883774 width=92)\n(actual time=0.016..1194.453 rows=1883699 loops=1)\"\n\"Total runtime: 2139.067 ms\"\n\nWindows:\n\"Seq Scan on lancamentosteste (cost=0.00..49288.67 rows=1883967 width=92)\n(actual time=0.036..745.409 rows=1883699 loops=1)\"\n\"Total runtime: 797.159 ms\"\n\n\n\nI did some test reading the disk and monitored with iotop.\n\n#hdparm -t /dev/sdc\n\n/dev/sdc:\n Timing buffered disk reads: 730 MB in 3.01 seconds = 242.65 MB/sec\n\n\n#hdparm -T /dev/sdc\n\n/dev/sdc:\n Timing cached reads: 9392 MB in 2.00 seconds = 4706.06 MB/sec\n\n\n\n#time sh -c \"dd if=/dev/zero of=ddfile bs=8k count=250000 && sync\"; rm\nddfile\n250000+0 registros de entrada\n250000+0 registros de saÃda\n2048000000 bytes (2,0 GB) copiados, 5,84926 s, 350 MB/s\n\nreal 0m9.488s\nuser 0m0.068s\nsys 0m5.488s\n\n\nIn the tests monitoring the disk by iotop, it kept constant the reading\nbetween 100MB/s to 350MB/s\n\nBy doing the same monitoring on iotop and running SELECT, the disk reading\ndoes not exceed 100kb/s, I have the impression that some configuration of\nLINUX or Postgres is limiting the use of the total capacity of DISCO.\n\nDoes anyone know if there is any setting for this?\n\n2017-07-01 18:17 GMT-03:00 Julien Rouhaud <[email protected]>:\n\n> On 01/07/2017 22:58, Tomas Vondra wrote:\n> > After that, you need to determine what is the bottleneck. Perhaps the\n> > resources are saturated by something else running on the system - other\n> > queries, maybe something else running next to PostgreSQL. Look at top\n> > and iotop while running the queries, and other system tools.\n> >\n>\n> Another explanation would be network issue. Are they stored in\n> different locations? And dhoes\n>\n> EXPLAIN ANALYZE select * from MINHATABELA\n>\n> has similar timings on both environment?\n>\n> Also, I didn't see any indication about how exactly were the tests\n> performed. 
Was it using psql, pgAdmin or something else ?\n>\n> --\n> Julien Rouhaud\n> http://dalibo.com - http://dalibo.org\n>\n\n\n\n-- \nAtenciosamente\nDaviramos Roussenq Fortunato\n\nI am using pgAdmin for SQL test.Linux: EXPLAIN ANALYZE select * from\"Seq Scan on lancamentosteste (cost=0.00..49289.74 rows=1883774 width=92) (actual time=0.016..1194.453 rows=1883699 loops=1)\"\"Total runtime: 2139.067 ms\"Windows:\"Seq Scan on lancamentosteste (cost=0.00..49288.67 rows=1883967 width=92) (actual time=0.036..745.409 rows=1883699 loops=1)\"\"Total runtime: 797.159 ms\"I did some test reading the disk and monitored with iotop.#hdparm -t /dev/sdc/dev/sdc: Timing buffered disk reads: 730 MB in 3.01 seconds = 242.65 MB/sec #hdparm -T /dev/sdc/dev/sdc: Timing cached reads: 9392 MB in 2.00 seconds = 4706.06 MB/sec #time sh -c \"dd if=/dev/zero of=ddfile bs=8k count=250000 && sync\"; rm ddfile250000+0 registros de entrada250000+0 registros de saÃda2048000000 bytes (2,0 GB) copiados, 5,84926 s, 350 MB/sreal 0m9.488suser 0m0.068ssys 0m5.488sIn the tests monitoring the disk by iotop, it kept constant the reading between 100MB/s to 350MB/sBy doing the same monitoring on iotop and running SELECT, the disk reading does not exceed 100kb/s, I have the impression that some configuration of LINUX or Postgres is limiting the use of the total capacity of DISCO.Does anyone know if there is any setting for this?2017-07-01 18:17 GMT-03:00 Julien Rouhaud <[email protected]>:On 01/07/2017 22:58, Tomas Vondra wrote:\n> After that, you need to determine what is the bottleneck. Perhaps the\n> resources are saturated by something else running on the system - other\n> queries, maybe something else running next to PostgreSQL. Look at top\n> and iotop while running the queries, and other system tools.\n>\n\nAnother explanation would be network issue. Are they stored in\ndifferent locations? And dhoes\n\nEXPLAIN ANALYZE select * from MINHATABELA\n\nhas similar timings on both environment?\n\nAlso, I didn't see any indication about how exactly were the tests\nperformed. Was it using psql, pgAdmin or something else ?\n\n--\nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n-- AtenciosamenteDaviramos Roussenq Fortunato",
"msg_date": "Sat, 1 Jul 2017 22:26:01 -0300",
"msg_from": "Daviramos Roussenq Fortunato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple SQL too slow"
},
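Note that EXPLAIN ANALYZE executes the scan but never ships the rows to the client, so the roughly 2-second runtime above versus the observed 7 minutes points at transfer or client-side rendering rather than the disk. One way to time the raw server-to-client transfer without pgAdmin's result grid, from psql:

\timing on
\copy lancamentosteste TO /dev/null    -- on a Windows client, use a target such as NUL instead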
{
"msg_contents": "On 2 July 2017 02:26:01 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>I am using pgAdmin for SQL test.\n>\n>\n\nAre you using real hardware or is it vitual? Needs the query without explain analyse the same time? Can you try it with psql (THE command line interface)?\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 02 Jul 2017 05:25:02 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
{
"msg_contents": "\n\nOn 07/02/2017 03:26 AM, Daviramos Roussenq Fortunato wrote:\n> I am using pgAdmin for SQL test.\n> \n> Linux:\n> \n> EXPLAIN ANALYZE select * from\n> \"Seq Scan on lancamentosteste (cost=0.00..49289.74 rows=1883774 \n> width=92) (actual time=0.016..1194.453 rows=1883699 loops=1)\"\n> \"Total runtime: 2139.067 ms\"\n> \n> Windows:\n> \"Seq Scan on lancamentosteste (cost=0.00..49288.67 rows=1883967 \n> width=92) (actual time=0.036..745.409 rows=1883699 loops=1)\"\n> \"Total runtime: 797.159 ms\"\n> \n\nI'm really, really confused. In the first message you claimed the \nqueries take 7 and 3 minutes, yet here we see the queries taking just a \nfew seconds.\n\n> \n> \n> I did some test reading the disk and monitored with iotop.\n> \n> #hdparm -t /dev/sdc\n> \n> /dev/sdc:\n> Timing buffered disk reads: 730 MB in 3.01 seconds = 242.65 MB/sec\n> \n> #hdparm -T /dev/sdc\n> \n> /dev/sdc:\n> Timing cached reads: 9392 MB in 2.00 seconds = 4706.06 MB/sec\n> #time sh -c \"dd if=/dev/zero of=ddfile bs=8k count=250000 && sync\"; rm \n> ddfile\n> 250000+0 registros de entrada\n> 250000+0 registros de saÃda\n> 2048000000 bytes (2,0 GB) copiados, 5,84926 s, 350 MB/s\n> \n> real 0m9.488s\n> user 0m0.068s\n> sys 0m5.488s\n> \n> \n> In the tests monitoring the disk by iotop, it kept constant the reading \n> between 100MB/s to 350MB/s\n> \n> By doing the same monitoring on iotop and running SELECT, the disk \n> reading does not exceed 100kb/s, I have the impression that some \n> configuration of LINUX or Postgres is limiting the use of the total \n> capacity of DISCO.\n> \n> Does anyone know if there is any setting for this?\n> \n\nThere is no such setting. But it's possible that the network is very \nslow, so transferring the results from the server to the client takes \nvery long. Or that formatting the results in the client takes a lot of \ntime (I'm not sure why there'd be a difference between Windows and Linux \nthough).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 2 Jul 2017 10:39:09 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
{
"msg_contents": "Le 2 juillet 2017 10:39:09 GMT+02:00, Tomas Vondra <[email protected]> a écrit :\n>\n>\n>On 07/02/2017 03:26 AM, Daviramos Roussenq Fortunato wrote:\n>> I am using pgAdmin for SQL test.\n>> \n>> Linux:\n>> \n>> EXPLAIN ANALYZE select * from\n>> \"Seq Scan on lancamentosteste (cost=0.00..49289.74 rows=1883774 \n>> width=92) (actual time=0.016..1194.453 rows=1883699 loops=1)\"\n>> \"Total runtime: 2139.067 ms\"\n>> \n>> Windows:\n>> \"Seq Scan on lancamentosteste (cost=0.00..49288.67 rows=1883967 \n>> width=92) (actual time=0.036..745.409 rows=1883699 loops=1)\"\n>> \"Total runtime: 797.159 ms\"\n>> \n>\n>I'm really, really confused. In the first message you claimed the \n>queries take 7 and 3 minutes, yet here we see the queries taking just a\n>\n>few seconds.\n>\n>> \n>> \n>> I did some test reading the disk and monitored with iotop.\n>> \n>> #hdparm -t /dev/sdc\n>> \n>> /dev/sdc:\n>> Timing buffered disk reads: 730 MB in 3.01 seconds = 242.65 MB/sec\n>> \n>> #hdparm -T /dev/sdc\n>> \n>> /dev/sdc:\n>> Timing cached reads: 9392 MB in 2.00 seconds = 4706.06 MB/sec\n>> #time sh -c \"dd if=/dev/zero of=ddfile bs=8k count=250000 && sync\";\n>rm \n>> ddfile\n>> 250000+0 registros de entrada\n>> 250000+0 registros de saÃda\n>> 2048000000 bytes (2,0 GB) copiados, 5,84926 s, 350 MB/s\n>> \n>> real 0m9.488s\n>> user 0m0.068s\n>> sys 0m5.488s\n>> \n>> \n>> In the tests monitoring the disk by iotop, it kept constant the\n>reading \n>> between 100MB/s to 350MB/s\n>> \n>> By doing the same monitoring on iotop and running SELECT, the disk \n>> reading does not exceed 100kb/s, I have the impression that some \n>> configuration of LINUX or Postgres is limiting the use of the total \n>> capacity of DISCO.\n>> \n>> Does anyone know if there is any setting for this?\n>> \n>\n>There is no such setting. But it's possible that the network is very \n>slow, so transferring the results from the server to the client takes \n>very long. Or that formatting the results in the client takes a lot of \n>time (I'm not sure why there'd be a difference between Windows and\n>Linux \n>though).\n\nCould it be that you are doing your queries on pgadmin that is remote from the Linux server, and local to the windows server, hence the difference in perceived performance?\n\nNicolas\n>\n>regards\n\n\n-- \nEnvoyé de mon appareil Android avec K-9 Mail. Veuillez excuser ma brièveté.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 02 Jul 2017 12:01:44 +0200",
"msg_from": "Nicolas CHARLES <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple SQL too slow"
},
{
"msg_contents": "REAL HARDWARE.\n\nI ran the same SQL via pgsql it took only 13 seconds.\n\nMy bottleneck has everything to be network.\n\n#tcptrack -i eth1\n\nAnalyzing the traffic on the network, the speed is only 512Kb / s on port\n5432.\n\n# ethtool eth1\nSettings for eth1:\n Supported ports: [ TP ]\n Supported link modes: 10baseT/Half 10baseT/Full\n 100baseT/Half 100baseT/Full\n 1000baseT/Full\n Supported pause frame use: No\n Supports auto-negotiation: Yes\n Advertised link modes: 10baseT/Half 10baseT/Full\n 100baseT/Half 100baseT/Full\n 1000baseT/Full\n Advertised pause frame use: No\n Advertised auto-negotiation: Yes\n Speed: 1000Mb/s\n Duplex: Full\n Port: Twisted Pair\n PHYAD: 1\n Transceiver: internal\n Auto-negotiation: on\n MDI-X: off\n Supports Wake-on: pumbg\n Wake-on: g\n Current message level: 0x00000007 (7)\n drv probe link\n Link detected: yes\n\n# iptables --list\nChain INPUT (policy ACCEPT)\ntarget prot opt source destination\n\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\n\nChain OUTPUT (policy ACCEPT)\ntarget prot opt source destination\n\n\nI tested the file transfer, the port speed did not exceed 512Kb/s on port\n22.\n\nI have some limitation on the network.\n\nBut I can not figure out why. This linux was installed by me, with only\nminimal packages to install postgres.\n\nWhat can it be?\nWell it is identified that the problem is not the postgres, but the\noperating systems, maybe I should look for the solution in another list.\n\n2017-07-02 1:25 GMT-03:00 Andreas Kretschmer <[email protected]>:\n\n> On 2 July 2017 02:26:01 GMT+01:00, Daviramos Roussenq Fortunato <\n> [email protected]> wrote:\n> >I am using pgAdmin for SQL test.\n> >\n> >\n>\n> Are you using real hardware or is it vitual? Needs the query without\n> explain analyse the same time? Can you try it with psql (THE command line\n> interface)?\n>\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company\n>\n\n\n\n-- \nAtenciosamente\nDaviramos Roussenq Fortunato\n\nREAL HARDWARE.I ran the same SQL via pgsql it took only 13 seconds.My bottleneck has everything to be network.#tcptrack -i eth1Analyzing the traffic on the network, the speed is only 512Kb / s on port 5432.# ethtool eth1Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: yes # iptables --listChain INPUT (policy ACCEPT)target prot opt source destinationChain FORWARD (policy ACCEPT)target prot opt source destinationChain OUTPUT (policy ACCEPT)target prot opt source destinationI tested the file transfer, the port speed did not exceed 512Kb/s on port 22.I have some limitation on the network.But I can not figure out why. 
This linux was installed by me, with only minimal packages to install postgres.What can it be?Well it is identified that the problem is not the postgres, but the operating systems, maybe I should look for the solution in another list.2017-07-02 1:25 GMT-03:00 Andreas Kretschmer <[email protected]>:On 2 July 2017 02:26:01 GMT+01:00, Daviramos Roussenq Fortunato <[email protected]> wrote:\n>I am using pgAdmin for SQL test.\n>\n>\n\nAre you using real hardware or is it vitual? Needs the query without explain analyse the same time? Can you try it with psql (THE command line interface)?\n\n\nRegards, Andreas\n\n--\n2ndQuadrant - The PostgreSQL Support Company\n-- AtenciosamenteDaviramos Roussenq Fortunato",
"msg_date": "Sun, 2 Jul 2017 10:57:49 -0300",
"msg_from": "Daviramos Roussenq Fortunato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple SQL too slow"
}
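With a plain file copy over port 22 also capped at roughly 512 kb/s, the bottleneck is clearly below PostgreSQL. A quick way to measure raw TCP throughput between the two hosts (iperf3 is just one option; any network benchmark will do), and it is worth checking the negotiated speed/duplex on the client NIC and the switch port as well, since the server side already reports 1000Mb/s full duplex:

iperf3 -s                   # on the database server
iperf3 -c 192.168.1.111     # on the client machine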
] |
[
{
"msg_contents": "Hi experts,\nWe have configured a replication environment in Windows 10. But I am getting an error below the error messages while starting slave instance.\n\nError:\n\n2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:02 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:02 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:02 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:06 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:06 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:11 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:11 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:16 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:16 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg\n\n\nBelow are the parameter at Primary/standby & Recovery.conf as well as hba.conf file.\n\n@MASTER\nwal_level = hot_standby\nshared_buffers = 128MB\nport = 5432\nmax_connections = 100\nwal_level = hot_standby #\narchive_mode = on\narchive_command = 'copy \"%p\" \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\"'\nmax_wal_senders = 1\nwal_keep_segments = 10\nlisten_addresses = '*'\n\n@SLAVE\n'[email protected]'\n\nport = 5432\nhot_standby = on\nmax_connections = 100\nlisten_addresses = '*'\nshared_buffers = 128MB\n\nIn Recovery.conf\nrestore_command = 'copy \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\" \"%p\"'\nstandby_mode = 'on'\nprimary_coninfo = 'host = 192.168.1.111 port = 5432 user = postgres password = postgres'\n\nIn Hba.conf @ Master side\n# TYPE DATABASE USER ADDRESS METHOD\n# replication privilege.\nhost replication postgres 192.168.1.106/32 trust\n\nPlease help on this issue. What things I have to changed & checked.\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi experts,\nWe have configured a replication environment in Windows 10. But I am getting an error below the error messages while starting slave instance.\n \nError:\n \n2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:02 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:02 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:02 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:06 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:06 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:11 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:11 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:16 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:16 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg\n \n \nBelow are the parameter at Primary/standby & Recovery.conf as well as hba.conf file.\n \n@MASTER\n\nwal_level = hot_standby\nshared_buffers = 128MB \n\nport = 5432 \n\nmax_connections = 100 \n\nwal_level = hot_standby # \n\narchive_mode = on \n\narchive_command = 'copy \"%p\" \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\"' \n\nmax_wal_senders = 1 \n\nwal_keep_segments = 10\nlisten_addresses = '*' \n\n \n@SLAVE\n'[email protected]'\n \nport = 5432 \n\nhot_standby = on \n\nmax_connections = 100 \n\nlisten_addresses = '*' \n\nshared_buffers = 128MB \n\n \n\nIn Recovery.conf\nrestore_command = 'copy \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\" \"%p\"'\nstandby_mode = 'on'\nprimary_coninfo = 'host = 192.168.1.111 port = 5432 user = postgres password = postgres'\n \nIn Hba.conf @ Master side\n# TYPE DATABASE USER ADDRESS METHOD\n# replication privilege.\nhost replication postgres 192.168.1.106/32 trust\n \nPlease help on this issue. What things I have to changed & checked.\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Wed, 5 Jul 2017 09:14:23 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to start the slave instance"
}
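The "database system identifier differs" error means the standby's data directory was created by its own initdb rather than copied from the primary: every initdb stamps a fresh system identifier into pg_control, and a physical standby must carry the primary's. The identifiers can be compared by running pg_controldata against each data directory and looking at the "Database system identifier" line (the path below is a placeholder for the actual data directory):

pg_controldata "C:\Program Files\PostgreSQL\<version>\data"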
] |
[
{
"msg_contents": "Hi experts,\nWe have configured a replication environment in Windows 10. But I am getting below the error messages while starting slave instance.\n\nError:\n\n2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:02 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:02 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:02 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:06 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:06 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:11 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database system identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:11 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:16 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:16 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg\n\n\nBelow are the parameters at Primary/standby & Recovery.conf as well as hba.conf file.\n\n@MASTER\nwal_level = hot_standby\nshared_buffers = 128MB\nport = 5432\nmax_connections = 100\nwal_level = hot_standby #\narchive_mode = on\narchive_command = 'copy \"%p\" \"\\\\\\\\192.168.1.111\\\\archive\\\\%f<file://192.168.1.111/archive/%25f>\"'\nmax_wal_senders = 1\nwal_keep_segments = 10\nlisten_addresses = '*'\n\n@SLAVE\n'[email protected]'\n\nport = 5432\nhot_standby = on\nmax_connections = 100\nlisten_addresses = '*'\nshared_buffers = 128MB\n\nIn Recovery.conf\nrestore_command = 'copy \"\\\\\\\\192.168.1.111\\\\archive\\\\%f<file://192.168.1.111/archive/%25f>\" \"%p\"'\nstandby_mode = 'on'\nprimary_coninfo = 'host = 192.168.1.111 port = 5432 user = postgres password = postgres'\n\nIn Hba.conf @ Master side\n# TYPE DATABASE USER ADDRESS METHOD\n# replication privilege.\nhost replication postgres 192.168.1.106/32 trust\n\nPlease help on this issue. What things I have to changed & checked.\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi experts,\nWe have configured a replication environment in Windows 10. But I am getting below the error messages while starting slave instance.\n \nError:\n \n2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:02 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:02 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:02 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:06 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:06 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:06 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:11 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg_control database\n system identifier is 6379088242155134709.\n2017-07-05 00:00:11 IST FATAL: database system identifier differs between the primary and standby\n2017-07-05 00:00:11 IST DETAIL: The primary's identifier is 6438799484563175092, the standby's identifier is 6379088242155134709.\n2017-07-05 00:00:16 IST LOG: restored log file \"000000010000000000000022\" from archive\n2017-07-05 00:00:16 IST LOG: WAL file is from different database system: WAL file database system identifier is 6438799484563175092, pg\n \n \nBelow are the parameters at Primary/standby & Recovery.conf as well as hba.conf file.\n \n@MASTER\n\nwal_level = hot_standby\nshared_buffers = 128MB \n\nport = 5432 \n\nmax_connections = 100 \n\nwal_level = hot_standby # \n\narchive_mode = on \n\narchive_command = 'copy \"%p\" \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\"' \n \nmax_wal_senders = 1 \n\nwal_keep_segments = 10\nlisten_addresses = '*' \n\n \n@SLAVE\n'[email protected]'\n \nport = 5432 \n\nhot_standby = on \n\nmax_connections = 100 \n\nlisten_addresses = '*' \n\nshared_buffers = 128MB \n\n \n\nIn Recovery.conf\nrestore_command = 'copy \"\\\\\\\\192.168.1.111\\\\archive\\\\%f\" \"%p\"'\nstandby_mode = 'on'\nprimary_coninfo = 'host = 192.168.1.111 port = 5432 user = postgres password = postgres'\n \nIn Hba.conf @ Master side\n# TYPE DATABASE USER ADDRESS METHOD\n# replication privilege.\nhost replication postgres 192.168.1.106/32 trust\n \nPlease help on this issue. What things I have to changed & checked.\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Wed, 5 Jul 2017 09:26:01 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to start the slave instance"
},
{
"msg_contents": "On Wed, Jul 5, 2017 at 3:26 AM, Daulat Ram <[email protected]> wrote:\n> Hi experts,\n>\n> We have configured a replication environment in Windows 10. But I am getting\n> below the error messages while starting slave instance.\n>\n>\n>\n> Error:\n>\n>\n>\n> 2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\"\n> from archive\n>\n> 2017-07-05 00:00:02 IST LOG: WAL file is from different database system:\n> WAL file database system identifier is 6438799484563175092, pg_control\n> database system identifier is 6379088242155134709.\n>\n> 2017-07-05 00:00:02 IST FATAL: database system identifier differs between\n> the primary and standby\n>\n> 2017-07-05 00:00:02 IST DETAIL: The primary's identifier is\n> 6438799484563175092, the standby's identifier is 6379088242155134709.\n>\n\nSo how did you get to here? It doesn't look like a proper rsync or\npg_basebackup method got you here.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Jul 2017 07:10:19 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to start the slave instance"
},
{
"msg_contents": "Hi Michael,\r\n\r\nWe are using different releases of windows. Is this issue reported due to different versions of windows releases.\r\nMaster server : Windows 7 Professional\r\nSlave server : Windows 10 Professional\r\n\r\nNote: We have followed the following steps to configure the replication\r\n\r\nStep:1\r\nsudo passwd postgres\r\nSwitch over to the postgres user like this:\r\nsudo su – postgres\r\n\r\nStep:2 Generate an ssh key for the postgres user:\r\nssh-keygen\r\nPress “ENTER” to all the prompts that follow.\r\n\r\nStep:3\r\nTransfer the keys to the other server by following cmd:\r\nssh-copy-id IP if opposite_server\r\n\r\nStep:4\r\nConfigure the Master Server\r\nFirst, we will create a user called “rep” that can be used solely for replication:\r\npsql -c “CREATE USER rep REPLICATION LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD ‘yourpassword’;”\r\n\r\nStep:5\r\ncd /etc/postgresql/9.3/main\r\nvim pg_hba.conf\r\nadd below line at bottom of the file\r\nhost replication rep xxx.xxx.xxx.xx/32 (IP address of slave) md5\r\n: wq! (SAVE FILE)\r\nStep:6\r\nvim postgresql.conf\r\nFind below parameters. Uncomment them if they are commented.\r\nlisten_addresses = ‘localhost, xxx.xxx.xxx.xx’ (IP address of current host) wal_level = ‘hot_standby’\r\narchive_mode = on\r\narchive_command = ‘cd.’\r\nmax_wal_senders = 1\r\nhot_standby = on\r\n: wq! (SAVE FILE)\r\nRestart the master server to take effect your changes:\r\nservice postgresql restart\r\nStep:7\r\nConfigure the Slave Server\r\nservice postgresql stop\r\ncd /etc/postgresql/9.3/main\r\nAdjust the access file to allow the other server to connect to this.\r\nvim pg_hba.conf\r\nadd below line at bottom of the file\r\nhost replication rep xxx.xxx.xxx.xx/32 (IP address of master) md5\r\n: wq! (SAVE FILE)\r\n\r\nStep:8\r\nvim postgresql.conf\r\nYou can use the same configuration options you set for the master server, modifying only the IP address to reflect the slave server’s address:\r\nlisten_addresses = ‘localhost, xxx.xxx.xxx.xx’ (IP address of THIS host) wal_level = ‘hot_standby’\r\narchive_mode = on\r\narchive_command = ‘cd.’\r\nmax_wal_senders = 1\r\nhot_standby = on\r\n:wq!(SAVE FILE)\r\n\r\nStep:9\r\nReplicating the Initial database:\r\nOn the master server, we can use an internal postgres backup start command to create a backup label command. We then will transfer the database data to our slave and then issue an internal backup stop command to clean up:\r\npsql -c “select pg_start_backup(‘initial_backup’);”\r\nrsync -cva –inplace –exclude=*pg_xlog* /var/lib/postgresql/9.3/main/ slave_IP_address:/var/lib/postgresql/9.3/main/\r\npsql -c “select pg_stop_backup ();”\r\n\r\nStep:10\r\nWe now have to configure a recovery file on our slave.\r\ncd /var/lib/postgresql/9.3/main\r\nvim recovery. conf\r\nFill in the following information in to it, make sure to change the IP address of your master server and the password for the rep user you created:\r\nstandby_mode = ‘on’\r\nprimary_conninfo = ‘host=master_IP_address port=5432 user=rep password=yourpassword’\r\ntrigger_file = ‘/tmp/postgresql.trigger.5432’\r\n\r\nThe last line in the file, trigger file, is one of the most interesting parts of the entire configuration. If you create a file at that location on your slave machine, your slave will reconfigure itself to act as a master.\r\nNow start your slave server. Type:\r\nservice postgresql start\r\n\r\nStep:11\r\nYou’ll want to check the logs to see if there are any problems. 
They are located on both machines here:\r\nless /var/log/postgresql/postgresql-9.3-main.log\r\n\r\nStep:12\r\nTest the Replication\r\nOn the master server, as the postgres user, log into the postgres system by typing:\r\npsql\r\nWe will create a test table to create some changes:\r\nCREATE TABLE rep_test (test varchar(50)); Now insert value into it INSERT INTO rep_test VALUES (‘data1’); INSERT INTO rep_test VALUES (‘data2’); INSERT INTO rep_test VALUES (‘data3’); INSERT INTO rep_test VALUES (‘data4’); INSERT INTO rep_test VALUES (‘data5’); You can now exit out of this interface by typing:\r\n\\q\r\n\r\nRegards,\r\nDaulat\r\n\r\n\r\n-----Original Message-----\r\nFrom: Scott Marlowe [mailto:[email protected]]\r\nSent: 05 July, 2017 6:40 PM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: [PERFORM] Unable to start the slave instance\r\n\r\nOn Wed, Jul 5, 2017 at 3:26 AM, Daulat Ram <[email protected]> wrote:\r\n> Hi experts,\r\n>\r\n> We have configured a replication environment in Windows 10. But I am\r\n> getting below the error messages while starting slave instance.\r\n>\r\n>\r\n>\r\n> Error:\r\n>\r\n>\r\n>\r\n> 2017-07-05 00:00:02 IST LOG: restored log file \"000000010000000000000022\"\r\n> from archive\r\n>\r\n> 2017-07-05 00:00:02 IST LOG: WAL file is from different database system:\r\n> WAL file database system identifier is 6438799484563175092, pg_control\r\n> database system identifier is 6379088242155134709.\r\n>\r\n> 2017-07-05 00:00:02 IST FATAL: database system identifier differs\r\n> between the primary and standby\r\n>\r\n> 2017-07-05 00:00:02 IST DETAIL: The primary's identifier is\r\n> 6438799484563175092, the standby's identifier is 6379088242155134709.\r\n>\r\n\r\nSo how did you get to here? It doesn't look like a proper rsync or pg_basebackup method got you here.\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Jul 2017 05:02:11 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unable to start the slave instance"
},
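Step 12 above tests replication by creating throwaway tables; once the standby starts, the same check can be made read-only from the catalogs. A minimal sketch using the pre-10 function and column names that match the 9.3/9.6 releases discussed in this thread:

-- on the primary: one row per connected standby
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;

-- on the standby: confirm it is in recovery and how far WAL replay has progressed
SELECT pg_is_in_recovery(), pg_last_xlog_replay_location();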
{
"msg_contents": "On Thu, Jul 6, 2017 at 2:02 PM, Daulat Ram <[email protected]> wrote:\n> We are using different releases of windows. Is this issue reported due to different versions of windows releases.\n> Master server : Windows 7 Professional\n> Slave server : Windows 10 Professional\n\nPlease do not top-post.\n\nThat may be a problem. Versions of PostgreSQL compiled across\ndifferent platforms are different things, and replication is not\nsupported for that as things happen at a low binary level.\n\n> Step:9\n> Replicating the Initial database:\n> On the master server, we can use an internal postgres backup start command to create a backup label command. We then will transfer the database data to our slave and then issue an internal backup stop command to clean up:\n> psql -c “select pg_start_backup(‘initial_backup’);”\n> rsync -cva –inplace –exclude=*pg_xlog* /var/lib/postgresql/9.3/main/ slave_IP_address:/var/lib/postgresql/9.3/main/\n> psql -c “select pg_stop_backup ();”\n\nShouldn't you remove the data of the slave as well first?\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Jul 2017 14:10:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to start the slave instance"
}
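Both replies point at the same root cause: a standby must start life as a physical copy of the primary, otherwise the system identifiers written by initdb can never match. A minimal re-seed sketch with pg_basebackup, run on the standby with its server stopped; the data directory follows the 9.3-style paths in the steps above and the rep user from Step 4, both of which are assumptions, and the shell syntax is the Linux flavour used in those steps. (The recovery.conf quoted earlier also spells primary_conninfo as primary_coninfo, which the server rejects as an unrecognized recovery parameter.)

# standby only -- never run this against the primary's data directory
pg_ctl -D /var/lib/postgresql/9.3/main stop
rm -rf /var/lib/postgresql/9.3/main/*

# copy the primary's cluster; -X stream needs a second WAL sender slot, so with
# max_wal_senders = 1 on the primary either raise that setting first or use -X fetch
pg_basebackup -h 192.168.1.111 -p 5432 -U rep -D /var/lib/postgresql/9.3/main -X stream -P

# recreate recovery.conf (standby_mode = 'on', primary_conninfo = '...') and start
pg_ctl -D /var/lib/postgresql/9.3/main start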
] |
[
{
"msg_contents": "I'm pondering approaches to partitioning large materialized views and was\nhoping for some feedback and thoughts on it from the [perform] minds.\n\nPostgreSQL 9.6.3 on Ubuntu 16.04 in the Google Cloud.\n\nI have a foreign table with 250M or so rows and 50 or so columns, with a\nUUID as the primary key. Queries to the foreign table have high latency.\n (From several minutes to more than an hour to run)\n\nIf I create a materialized view of this FT, including indexes, it takes\nabout 3-4 hours.\n\nIf I refresh the materialized view concurrently, it takes 4-5 DAYS.\n\nWhen I run \"refresh materialized view concurrently\", it takes about an hour\nfor it to download the 250M rows and load them onto the SSD tempspace. At\nthat point we flatline a single core, and run I/O on the main tablespace up\npretty high, and then stay that way until the refresh is complete.\n\nIn order to speed up the concurrent refreshes, I have it broken into 4\nmaterialized views, manually partitioned (by date) with a \"union all view\"\nin front of them. Refreshing the data which is changing regularly (new\ndata, in one of the partitions) doesn't require refreshing the entire data\nset. This works fairly well, and I can refresh the most recent partition\nin 1 - 2 hours (daily).\n\nHowever, sometimes I have to reach back in time and refresh the deeper\npartitions. This is taking 3 or more days to complete, even with the data\nbroken into 4 materialized views. This approache lets me refresh all of\nthe partitions at the same time, which uses more cores at the same time\n(and more tempspace), [I'd like to use as much of my system resources as\npossible to get the refresh to finish faster.] Unfortunately I am finding\nI need to refresh the deeper data more and more often (at least once per\nweek), and my table growth is going to jump from adding 3-5M rows per day\nto adding 10-20M rows per day over the next month or two. Waiting 3 or 4\ndays for the deeper data to be ready for consumption in PostgreSQL is no\nlonger acceptable to the business.\n\nIt doesn't look like partman supports partitioning materialized views. It\nalso doesn't look like PG 10's new partitioning features will work with\nmaterialized views (although I haven't tried it yet). Citus DB also seems\nto be unable to help in this scenario.\n\nI could create new materialized views every time I need new data, and then\nswap out the view that is in front of them. There are other objects in the\ndatabase which have dependencies on that view. In my experiments so far,\n\"create and replace\" seems to let me get away with this as long as the\ncolumns don't change.\n\nAlternatively, I could come up with a new partitioning scheme that lets me\nmore selectively run \"refresh concurrently\", and run more of those at the\nsame time.\n\nI was leaning towards this latter solution.\n\nSuppose I make a separate materialized view for each month of data. At the\nbeginning of each month I would have to make a new materialized view, and\nthen add it into the \"union all view\" on the fly.\n\nI would then need a \"refresh all\" script which refreshed as many of them\nconcurrently as I am willing to dedicate cores to. And I need some handy\nways to selectively refresh specific months when I know data for a\nparticular month or set of months changed.\n\nSo, I actually have 2 of these 250M row tables in the Foreign Database,\nthat I want to do this with. 
And maybe more coming soon?\n\nI'm curious if I'm overlooking other possible architectures or tools that\nmight make this simpler to manage.\n\n\nSimilarly, could I construct the \"union all view\" in front of the\npartitions to be partition aware so that the query planner doesn't try to\nlook in every one of the materialized views behind it to find the rows I\nwant? If I go with the monthly partition, I'll start with about 36\nmaterialized views behind the main view.\n\nI'm pondering approaches to partitioning large materialized views and was hoping for some feedback and thoughts on it from the [perform] minds.PostgreSQL 9.6.3 on Ubuntu 16.04 in the Google Cloud.I have a foreign table with 250M or so rows and 50 or so columns, with a UUID as the primary key. Queries to the foreign table have high latency. (From several minutes to more than an hour to run)If I create a materialized view of this FT, including indexes, it takes about 3-4 hours. If I refresh the materialized view concurrently, it takes 4-5 DAYS.When I run \"refresh materialized view concurrently\", it takes about an hour for it to download the 250M rows and load them onto the SSD tempspace. At that point we flatline a single core, and run I/O on the main tablespace up pretty high, and then stay that way until the refresh is complete.In order to speed up the concurrent refreshes, I have it broken into 4 materialized views, manually partitioned (by date) with a \"union all view\" in front of them. Refreshing the data which is changing regularly (new data, in one of the partitions) doesn't require refreshing the entire data set. This works fairly well, and I can refresh the most recent partition in 1 - 2 hours (daily).However, sometimes I have to reach back in time and refresh the deeper partitions. This is taking 3 or more days to complete, even with the data broken into 4 materialized views. This approache lets me refresh all of the partitions at the same time, which uses more cores at the same time (and more tempspace), [I'd like to use as much of my system resources as possible to get the refresh to finish faster.] Unfortunately I am finding I need to refresh the deeper data more and more often (at least once per week), and my table growth is going to jump from adding 3-5M rows per day to adding 10-20M rows per day over the next month or two. Waiting 3 or 4 days for the deeper data to be ready for consumption in PostgreSQL is no longer acceptable to the business.It doesn't look like partman supports partitioning materialized views. It also doesn't look like PG 10's new partitioning features will work with materialized views (although I haven't tried it yet). Citus DB also seems to be unable to help in this scenario.I could create new materialized views every time I need new data, and then swap out the view that is in front of them. There are other objects in the database which have dependencies on that view. In my experiments so far, \"create and replace\" seems to let me get away with this as long as the columns don't change.Alternatively, I could come up with a new partitioning scheme that lets me more selectively run \"refresh concurrently\", and run more of those at the same time.I was leaning towards this latter solution. Suppose I make a separate materialized view for each month of data. 
At the beginning of each month I would have to make a new materialized view, and then add it into the \"union all view\" on the fly.I would then need a \"refresh all\" script which refreshed as many of them concurrently as I am willing to dedicate cores to. And I need some handy ways to selectively refresh specific months when I know data for a particular month or set of months changed.So, I actually have 2 of these 250M row tables in the Foreign Database, that I want to do this with. And maybe more coming soon? I'm curious if I'm overlooking other possible architectures or tools that might make this simpler to manage.Similarly, could I construct the \"union all view\" in front of the partitions to be partition aware so that the query planner doesn't try to look in every one of the materialized views behind it to find the rows I want? If I go with the monthly partition, I'll start with about 36 materialized views behind the main view.",
"msg_date": "Thu, 6 Jul 2017 11:03:54 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioning materialized views"
},
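A minimal sketch of the monthly layout described above, with invented names (big_ft for the foreign table, id for the UUID key, created_at for the partitioning column). Each month gets its own materialized view plus a unique index, which REFRESH ... CONCURRENTLY requires, and the month predicate is repeated inside each arm of the UNION ALL view so the planner has something to exclude non-matching months on:

CREATE MATERIALIZED VIEW mv_2017_06 AS
    SELECT * FROM big_ft
    WHERE created_at >= '2017-06-01' AND created_at < '2017-07-01';
CREATE UNIQUE INDEX ON mv_2017_06 (id);   -- required by REFRESH ... CONCURRENTLY

CREATE MATERIALIZED VIEW mv_2017_07 AS
    SELECT * FROM big_ft
    WHERE created_at >= '2017-07-01' AND created_at < '2017-08-01';
CREATE UNIQUE INDEX ON mv_2017_07 (id);

CREATE OR REPLACE VIEW big_union AS
    SELECT * FROM mv_2017_06
    WHERE created_at >= '2017-06-01' AND created_at < '2017-07-01'
    UNION ALL
    SELECT * FROM mv_2017_07
    WHERE created_at >= '2017-07-01' AND created_at < '2017-08-01';

-- refresh only the months known to have changed, possibly several in parallel sessions
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_2017_07;

With constraint_exclusion at its default setting the planner does examine UNION ALL subqueries, so a date-constrained query against big_union should skip arms whose repeated predicate contradicts it, but that is worth confirming with EXPLAIN before trusting it across 36 partitions.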
{
"msg_contents": "> I'm curious if I'm overlooking other possible architectures or tools that might make this simpler to manage.\n\nOne of the issues with materialized views is that they are based on\nviews... For a concurrent update, it essentially performs a looped\nmerge, which can be pretty ugly. That's the price you pay to be\nnon-blocking. For this particular setup, I'd actually recommend using\nsomething like pglogical to just maintain a live copy of the remote\ntable or wait for Postgres 10's logical replication. If you _can't_ do\nthat due to cloud restrictions, you'd actually be better off doing an\natomic swap.\n\nCREATE MATERIALIZED VIEW y AS ...;\n\nBEGIN;\nALTER MATERIALIZED VIEW x RENAME TO x_old;\nALTER MATERIALIZED VIEW y RENAME TO x;\nDROP MATERIALIZED VIEW x_old;\nCOMMIT;\n\nYou could still follow your partitioned plan if you don't want to\nupdate all of the data at once. Let's face it, 3-4 hours is still a\nton of data transfer and calculation.\n\n-- \nShaun M Thomas - 2ndQuadrant\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Jul 2017 10:25:23 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning materialized views"
},
{
"msg_contents": "On Thu, Jul 6, 2017 at 11:25 AM, Shaun Thomas <[email protected]>\nwrote:\n\n> > I'm curious if I'm overlooking other possible architectures or tools\n> that might make this simpler to manage.\n>\n> One of the issues with materialized views is that they are based on\n> views... For a concurrent update, it essentially performs a looped\n> merge, which can be pretty ugly. That's the price you pay to be\n> non-blocking. For this particular setup, I'd actually recommend using\n> something like pglogical to just maintain a live copy of the remote\n> table or wait for Postgres 10's logical replication.\n\n\nUnfortunately the foreign database is Hadoop. (As A Service)\n\n\n\n> If you _can't_ do\n> that due to cloud restrictions, you'd actually be better off doing an\n> atomic swap.\n>\n> CREATE MATERIALIZED VIEW y AS ...;\n>\n> BEGIN;\n> ALTER MATERIALIZED VIEW x RENAME TO x_old;\n> ALTER MATERIALIZED VIEW y RENAME TO x;\n> DROP MATERIALIZED VIEW x_old;\n> COMMIT;\n>\n> This is an interesting idea. Thanks! I'll ponder that one.\n\n\n\n> You could still follow your partitioned plan if you don't want to\n> update all of the data at once. Let's face it, 3-4 hours is still a\n> ton of data transfer and calculation.\n>\n>\nyup.\n\nOn Thu, Jul 6, 2017 at 11:25 AM, Shaun Thomas <[email protected]> wrote:> I'm curious if I'm overlooking other possible architectures or tools that might make this simpler to manage.\n\nOne of the issues with materialized views is that they are based on\nviews... For a concurrent update, it essentially performs a looped\nmerge, which can be pretty ugly. That's the price you pay to be\nnon-blocking. For this particular setup, I'd actually recommend using\nsomething like pglogical to just maintain a live copy of the remote\ntable or wait for Postgres 10's logical replication. Unfortunately the foreign database is Hadoop. (As A Service) If you _can't_ do\nthat due to cloud restrictions, you'd actually be better off doing an\natomic swap.\n\nCREATE MATERIALIZED VIEW y AS ...;\n\nBEGIN;\nALTER MATERIALIZED VIEW x RENAME TO x_old;\nALTER MATERIALIZED VIEW y RENAME TO x;\nDROP MATERIALIZED VIEW x_old;\nCOMMIT;\nThis is an interesting idea. Thanks! I'll ponder that one. \nYou could still follow your partitioned plan if you don't want to\nupdate all of the data at once. Let's face it, 3-4 hours is still a\nton of data transfer and calculation.\nyup.",
"msg_date": "Thu, 6 Jul 2017 12:05:33 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning materialized views"
},
{
"msg_contents": ">\n>\n> If you _can't_ do\n>> that due to cloud restrictions, you'd actually be better off doing an\n>> atomic swap.\n>>\n>> CREATE MATERIALIZED VIEW y AS ...;\n>>\n>> BEGIN;\n>> ALTER MATERIALIZED VIEW x RENAME TO x_old;\n>> ALTER MATERIALIZED VIEW y RENAME TO x;\n>> DROP MATERIALIZED VIEW x_old;\n>> COMMIT;\n>>\n>> This is an interesting idea. Thanks! I'll ponder that one.\n>\n>\nI don't think the downstream dependencies will let that work without\nrebuilding them as well. The drop fails (without a cascade), and the\nother views and matviews that are built off of this all simply point to\nx_old.\n\nIf you _can't_ do\nthat due to cloud restrictions, you'd actually be better off doing an\natomic swap.\n\nCREATE MATERIALIZED VIEW y AS ...;\n\nBEGIN;\nALTER MATERIALIZED VIEW x RENAME TO x_old;\nALTER MATERIALIZED VIEW y RENAME TO x;\nDROP MATERIALIZED VIEW x_old;\nCOMMIT;\nThis is an interesting idea. Thanks! I'll ponder that one.I don't think the downstream dependencies will let that work without rebuilding them as well. The drop fails (without a cascade), and the other views and matviews that are built off of this all simply point to x_old.",
"msg_date": "Thu, 6 Jul 2017 12:27:27 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning materialized views"
},
{
"msg_contents": "> I don't think the downstream dependencies will let that work without\n> rebuilding them as well. The drop fails (without a cascade), and the other\n> views and matviews that are built off of this all simply point to x_old.\n\nWow, ouch. Yeah, I'd neglected to consider dependent objects. Your\nonly \"out\" at this point is to either add or utilize a \"modified_date\"\ncolumn of some kind, so you can maintain a different MV with some\nrecent window of data, and regularly merge that into a physical local\ncopy (not an MV) sort of like a running ETL. Though that won't help\nwith deletes, unfortunately.\n\n-- \nShaun M Thomas - 2ndQuadrant\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Jul 2017 08:12:58 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning materialized views"
},
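A minimal sketch of that running-ETL shape on 9.6, with invented names and a cut-down column list; it assumes the remote table exposes a usable modified_at column, which is exactly the assumption flagged above, and as noted it does nothing about rows deleted upstream:

-- plain table holds the full history (payload is a stand-in for the real ~50 columns)
CREATE TABLE big_local (
    id          uuid PRIMARY KEY,
    modified_at timestamptz,
    payload     jsonb
);

-- small window over the foreign table; the window moves each time it is refreshed
CREATE MATERIALIZED VIEW mv_recent AS
    SELECT id, modified_at, payload
    FROM   big_ft
    WHERE  modified_at > now() - interval '10 days';

-- periodic merge: refresh the small window, then upsert it into the local copy
REFRESH MATERIALIZED VIEW mv_recent;
INSERT INTO big_local (id, modified_at, payload)
SELECT id, modified_at, payload FROM mv_recent
ON CONFLICT (id) DO UPDATE
    SET modified_at = EXCLUDED.modified_at,
        payload     = EXCLUDED.payload;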
{
"msg_contents": "On Fri, Jul 7, 2017 at 10:12 AM, Shaun Thomas\n<[email protected]> wrote:\n>> I don't think the downstream dependencies will let that work without\n>> rebuilding them as well. The drop fails (without a cascade), and the other\n>> views and matviews that are built off of this all simply point to x_old.\n>\n> Wow, ouch. Yeah, I'd neglected to consider dependent objects. Your\n> only \"out\" at this point is to either add or utilize a \"modified_date\"\n> column of some kind, so you can maintain a different MV with some\n> recent window of data, and regularly merge that into a physical local\n> copy (not an MV) sort of like a running ETL. Though that won't help\n> with deletes, unfortunately.\n\nYou have another out: rebuild the dependent views before the drop.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jul 2017 22:23:28 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning materialized views"
}
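Sketched against the swap shown earlier, "rebuild the dependent views before the drop" amounts to re-issuing CREATE OR REPLACE VIEW for each dependent inside the same transaction: the replacement re-binds to the relation now named x, while the dependent keeps its own OID so anything built on top of it is untouched. reporting_v and its body are invented placeholders, and this only helps for plain views with an unchanged column list; dependent materialized views have no OR REPLACE form and would still need to be dropped and rebuilt.

BEGIN;
ALTER MATERIALIZED VIEW x RENAME TO x_old;
ALTER MATERIALIZED VIEW y RENAME TO x;
-- repeat for every dependent plain view, using its original definition text
CREATE OR REPLACE VIEW reporting_v AS
    SELECT ... FROM x ...;
DROP MATERIALIZED VIEW x_old;   -- nothing depends on it any more, so no CASCADE needed
COMMIT;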
] |
[
{
"msg_contents": "I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\nHardware is:\n\n*2x Intel Xeon E5550\n\n*72GB RAM\n\n*Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\nread/20% write) for Postgresql data only:\n\nLogical Drive: 3\n\nSize: 273.4 GB\n\nFault Tolerance: 1+0\n\nHeads: 255\n\nSectors Per Track: 32\n\nCylinders: 65535\n\nStrip Size: 128 KB\n\nFull Stripe Size: 256 KB\n\nStatus: OK\n\nCaching: Enabled\n\nUnique Identifier: 600508B1001037383941424344450A00\n\nDisk Name: /dev/sdc\n\nMount Points: /mnt/data 273.4 GB\n\nOS Status: LOCKED\n\nLogical Drive Label: A00A194750123456789ABCDE516F\n\nMirror Group 0:\n\nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n\nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n\nMirror Group 1:\n\nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n\nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n\nDrive Type: Data\n\nFormatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v\n/dev/sdc1.\n\nMounted in /etc/fstab with this line:\n\"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0 /mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\n\nPostgresql is the only application running on this server.\n\n\nPostgresql is used as a mini data warehouse to generate reports and do\nstatistical analysis. It is used by at most 2 users and fresh data is added\nevery 10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\nname | current_setting | source\n\n---------------------------------+------------------------------------------------+----------------------\n\napplication_name | psql | client\n\nautovacuum_vacuum_scale_factor | 0 | configuration file\n\nautovacuum_vacuum_threshold | 2000 | configuration file\n\ncheckpoint_completion_target | 0.9 | configuration file\n\ncheckpoint_timeout | 30min | configuration file\n\nclient_encoding | UTF8 | client\n\nclient_min_messages | log | configuration file\n\ncluster_name | 9.6/main | configuration file\n\ncpu_index_tuple_cost | 0.001 | configuration file\n\ncpu_operator_cost | 0.0005 | configuration file\n\ncpu_tuple_cost | 0.003 | configuration file\n\nDateStyle | ISO, YMD | configuration file\n\ndefault_statistics_target | 100 | configuration file\n\ndefault_text_search_config | pg_catalog.english | configuration file\n\ndynamic_shared_memory_type | posix | configuration file\n\neffective_cache_size | 22GB | configuration file\n\neffective_io_concurrency | 4 | configuration file\n\nexternal_pid_file | /var/run/postgresql/9.6-main.pid | configuration file\n\nlc_messages | C | configuration file\n\nlc_monetary | en_CA.UTF-8 | configuration file\n\nlc_numeric | en_CA.UTF-8 | configuration file\n\nlc_time | en_CA.UTF-8 | configuration file\n\nlisten_addresses | * | configuration file\n\nlock_timeout | 100s | configuration file\n\nlog_autovacuum_min_duration | 0 | configuration file\n\nlog_checkpoints | on | configuration file\n\nlog_connections | on | configuration file\n\nlog_destination | csvlog | configuration file\n\nlog_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n\nlog_disconnections | on | configuration file\n\nlog_error_verbosity | default | configuration file\n\nlog_file_mode | 0600 | configuration file\n\nlog_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\n\nlog_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file\n\nlog_lock_waits | on | 
configuration file\n\nlog_min_duration_statement | 0 | configuration file\n\nlog_min_error_statement | debug1 | configuration file\n\nlog_min_messages | debug1 | configuration file\n\nlog_rotation_size | 1GB | configuration file\n\nlog_temp_files | 0 | configuration file\n\nlog_timezone | localtime | configuration file\n\nlogging_collector | on | configuration file\n\nmaintenance_work_mem | 3GB | configuration file\n\nmax_connections | 10 | configuration file\n\nmax_locks_per_transaction | 256 | configuration file\n\nmax_parallel_workers_per_gather | 14 | configuration file\n\nmax_stack_depth | 2MB | environment variable\n\nmax_wal_size | 4GB | configuration file\n\nmax_worker_processes | 14 | configuration file\n\nmin_wal_size | 2GB | configuration file\n\nparallel_setup_cost | 1000 | configuration file\n\nparallel_tuple_cost | 0.012 | configuration file\n\nport | 5432 | configuration file\n\nrandom_page_cost | 22 | configuration file\n\nseq_page_cost | 1 | configuration file\n\nshared_buffers | 34GB | configuration file\n\nshared_preload_libraries | pg_stat_statements | configuration file\n\nssl | on | configuration file\n\nssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n\nssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | configuration file\n\nstatement_timeout | 1000000s | configuration file\n\nstats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp |\nconfiguration file\n\nsuperuser_reserved_connections | 1 | configuration file\n\nsyslog_facility | local1 | configuration file\n\nsyslog_ident | postgres | configuration file\n\nsyslog_sequence_numbers | on | configuration file\n\ntemp_file_limit | 80GB | configuration file\n\nTimeZone | localtime | configuration file\n\ntrack_activities | on | configuration file\n\ntrack_counts | on | configuration file\n\ntrack_functions | all | configuration file\n\nunix_socket_directories | /var/run/postgresql | configuration file\n\nvacuum_cost_delay | 1ms | configuration file\n\nvacuum_cost_limit | 5000 | configuration file\n\nvacuum_cost_page_dirty | 200 | configuration file\n\nvacuum_cost_page_hit | 10 | configuration file\n\nvacuum_cost_page_miss | 100 | configuration file\n\nwal_buffers | 16MB | configuration file\n\nwal_compression | on | configuration file\n\nwal_sync_method | fdatasync | configuration file\n\nwork_mem | 1468006kB | configuration file\n\n\nThe part of /etc/sysctl.conf I modified is:\n\nvm.swappiness = 1\n\nvm.dirty_background_bytes = 134217728\n\nvm.dirty_bytes = 1073741824\n\nvm.overcommit_ratio = 100\n\nvm.zone_reclaim_mode = 0\n\nkernel.numa_balancing = 0\n\nkernel.sched_autogroup_enabled = 0\n\nkernel.sched_migration_cost_ns = 5000000\n\n\nThe problem I have is very poor read. When I benchmark my array with fio I\nget random reads of about 200MB/s and 1100IOPS and sequential reads of\nabout 286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is at\nabout 25%. This problem is not query-dependent.\n\nI backed up the database, I reformated the array making sure it is well\naligned then restored the database and got the same result.\n\nWhere should I target my troubleshooting at this stage? I reformatted my\ndrive, I tuned my postgresql.conf and OS as much as I could. The hardware\ndoesn’t seem to have any issues, I am really puzzled.\n\nThanks!\n\n\nCharles\n\n-- \nCharles Nadeau Ph.D.\n\n\nI’m running\nPostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic). 
Hardware\nis:\n*2x Intel Xeon E5550\n*72GB RAM\n*Hardware RAID10 (4\nx 146GB SAS 10k) P410i controller with 1GB FBWC (80% read/20% write)\nfor Postgresql data only:\n Logical Drive:\n3\n Size: 273.4\nGB\n Fault\nTolerance: 1+0\n Heads: 255\n Sectors Per\nTrack: 32\n Cylinders:\n65535\n Strip Size:\n128 KB\n Full Stripe\nSize: 256 KB\n Status: OK\n Caching: \nEnabled\n Unique\nIdentifier: 600508B1001037383941424344450A00\n Disk Name:\n/dev/sdc\n Mount\nPoints: /mnt/data 273.4 GB\n OS Status:\nLOCKED\n Logical\nDrive Label: A00A194750123456789ABCDE516F\n Mirror\nGroup 0:\n \nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n Mirror\nGroup 1:\n \nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n Drive Type:\nData\nFormatted with ext4\nwith: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v /dev/sdc1.\nMounted in\n/etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n/mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\nPostgresql is the\nonly application running on this server.\n\n\nPostgresql is used\nas a mini data warehouse to generate reports and do statistical\nanalysis. It is used by at most 2 users and fresh data is added every\n10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\n name \n | current_setting | \nsource \n\n---------------------------------+------------------------------------------------+----------------------\n application_name \n | psql | client\n\nautovacuum_vacuum_scale_factor | 0 \n | configuration file\n\nautovacuum_vacuum_threshold | 2000 \n | configuration file\n\ncheckpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout \n | 30min |\nconfiguration file\n client_encoding \n | UTF8 | client\n client_min_messages\n | log |\nconfiguration file\n cluster_name \n | 9.6/main |\nconfiguration file\n\ncpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost \n | 0.0005 |\nconfiguration file\n cpu_tuple_cost \n | 0.003 |\nconfiguration file\n DateStyle \n | ISO, YMD |\nconfiguration file\n\ndefault_statistics_target | 100 \n | configuration file\n\ndefault_text_search_config | pg_catalog.english \n | configuration file\n\ndynamic_shared_memory_type | posix \n | configuration file\n\neffective_cache_size | 22GB \n | configuration file\n\neffective_io_concurrency | 4 \n | configuration file\n external_pid_file \n | /var/run/postgresql/9.6-main.pid |\nconfiguration file\n lc_messages \n | C |\nconfiguration file\n lc_monetary \n | en_CA.UTF-8 |\nconfiguration file\n lc_numeric \n | en_CA.UTF-8 |\nconfiguration file\n lc_time \n | en_CA.UTF-8 |\nconfiguration file\n listen_addresses \n | * |\nconfiguration file\n lock_timeout \n | 100s |\nconfiguration file\n\nlog_autovacuum_min_duration | 0 \n | configuration file\n log_checkpoints \n | on |\nconfiguration file\n log_connections \n | on |\nconfiguration file\n log_destination \n | csvlog |\nconfiguration file\n log_directory \n | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n log_disconnections \n | on |\nconfiguration file\n log_error_verbosity\n | default |\nconfiguration file\n log_file_mode \n | 0600 |\nconfiguration file\n log_filename \n | postgresql-%Y-%m-%d_%H%M%S.log |\nconfiguration file\n log_line_prefix \n | user=%u,db=%d,app=%aclient=%h |\nconfiguration file\n log_lock_waits \n | 
on |\nconfiguration file\n\nlog_min_duration_statement | 0 \n | configuration file\n\nlog_min_error_statement | debug1 \n | configuration file\n log_min_messages \n | debug1 |\nconfiguration file\n log_rotation_size \n | 1GB |\nconfiguration file\n log_temp_files \n | 0 |\nconfiguration file\n log_timezone \n | localtime |\nconfiguration file\n logging_collector \n | on |\nconfiguration file\n\nmaintenance_work_mem | 3GB \n | configuration file\n max_connections \n | 10 |\nconfiguration file\n\nmax_locks_per_transaction | 256 \n | configuration file\n\nmax_parallel_workers_per_gather | 14 \n | configuration file\n max_stack_depth \n | 2MB |\nenvironment variable\n max_wal_size \n | 4GB |\nconfiguration file\n\nmax_worker_processes | 14 \n | configuration file\n min_wal_size \n | 2GB |\nconfiguration file\n parallel_setup_cost\n | 1000 |\nconfiguration file\n parallel_tuple_cost\n | 0.012 |\nconfiguration file\n port \n | 5432 |\nconfiguration file\n random_page_cost \n | 22 |\nconfiguration file\n seq_page_cost \n | 1 |\nconfiguration file\n shared_buffers \n | 34GB |\nconfiguration file\n\nshared_preload_libraries | pg_stat_statements \n | configuration file\n ssl \n | on |\nconfiguration file\n ssl_cert_file \n | /etc/ssl/certs/ssl-cert-snakeoil.pem |\nconfiguration file\n ssl_key_file \n | /etc/ssl/private/ssl-cert-snakeoil.key |\nconfiguration file\n statement_timeout \n | 1000000s |\nconfiguration file\n\nstats_temp_directory |\n/var/run/postgresql/9.6-main.pg_stat_tmp | configuration file\n\nsuperuser_reserved_connections | 1 \n | configuration file\n syslog_facility \n | local1 |\nconfiguration file\n syslog_ident \n | postgres |\nconfiguration file\n\nsyslog_sequence_numbers | on \n | configuration file\n temp_file_limit \n | 80GB |\nconfiguration file\n TimeZone \n | localtime |\nconfiguration file\n track_activities \n | on |\nconfiguration file\n track_counts \n | on |\nconfiguration file\n track_functions \n | all |\nconfiguration file\n\nunix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_cost_delay \n | 1ms |\nconfiguration file\n vacuum_cost_limit \n | 5000 |\nconfiguration file\n\nvacuum_cost_page_dirty | 200 \n | configuration file\n\nvacuum_cost_page_hit | 10 \n | configuration file\n\nvacuum_cost_page_miss | 100 \n | configuration file\n wal_buffers \n | 16MB |\nconfiguration file\n wal_compression \n | on |\nconfiguration file\n wal_sync_method \n | fdatasync |\nconfiguration file\n work_mem \n | 1468006kB |\nconfiguration file\n\n\nThe part of\n/etc/sysctl.conf I modified is:\nvm.swappiness = 1\nvm.dirty_background_bytes\n= 134217728\nvm.dirty_bytes =\n1073741824\nvm.overcommit_ratio\n= 100\nvm.zone_reclaim_mode\n= 0\nkernel.numa_balancing\n= 0\nkernel.sched_autogroup_enabled\n= 0\nkernel.sched_migration_cost_ns\n= 5000000\n\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is\nat about 25%. This problem is not query-dependent.\nI backed up the\ndatabase, I reformated the array making sure it is well aligned then\nrestored the database and got the same result.\nWhere should I\ntarget my troubleshooting at this stage? I reformatted my drive, I\ntuned my postgresql.conf and OS as much as I could. 
The hardware\ndoesn’t seem to have any issues, I am really puzzled.\nThanks!\n\n\nCharles-- Charles Nadeau Ph.D.",
"msg_date": "Mon, 10 Jul 2017 16:03:36 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very poor read performance, query independent"
},
{
"msg_contents": "Although probably not the root cause, at the least I would set up hugepages\n (\nhttps://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES\n), and bump effective_io_concurrency up quite a bit as well (256 ?).\n\n\nOn Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]>\nwrote:\n\n> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n> Hardware is:\n>\n> *2x Intel Xeon E5550\n>\n> *72GB RAM\n>\n> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n> read/20% write) for Postgresql data only:\n>\n> Logical Drive: 3\n>\n> Size: 273.4 GB\n>\n> Fault Tolerance: 1+0\n>\n> Heads: 255\n>\n> Sectors Per Track: 32\n>\n> Cylinders: 65535\n>\n> Strip Size: 128 KB\n>\n> Full Stripe Size: 256 KB\n>\n> Status: OK\n>\n> Caching: Enabled\n>\n> Unique Identifier: 600508B1001037383941424344450A00\n>\n> Disk Name: /dev/sdc\n>\n> Mount Points: /mnt/data 273.4 GB\n>\n> OS Status: LOCKED\n>\n> Logical Drive Label: A00A194750123456789ABCDE516F\n>\n> Mirror Group 0:\n>\n> physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n>\n> physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n>\n> Mirror Group 1:\n>\n> physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n>\n> physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n>\n> Drive Type: Data\n>\n> Formatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v\n> /dev/sdc1.\n>\n> Mounted in /etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n> /mnt/data ext4 rw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro\n> 0 1\"\n>\n> Postgresql is the only application running on this server.\n>\n>\n> Postgresql is used as a mini data warehouse to generate reports and do\n> statistical analysis. It is used by at most 2 users and fresh data is added\n> every 10 days. 
The database has 16 tables: one is 224GB big and the rest\n> are between 16kB and 470MB big.\n>\n>\n> My configuration is:\n>\n>\n> name | current_setting | source\n>\n> ---------------------------------+--------------------------\n> ----------------------+----------------------\n>\n> application_name | psql | client\n>\n> autovacuum_vacuum_scale_factor | 0 | configuration file\n>\n> autovacuum_vacuum_threshold | 2000 | configuration file\n>\n> checkpoint_completion_target | 0.9 | configuration file\n>\n> checkpoint_timeout | 30min | configuration file\n>\n> client_encoding | UTF8 | client\n>\n> client_min_messages | log | configuration file\n>\n> cluster_name | 9.6/main | configuration file\n>\n> cpu_index_tuple_cost | 0.001 | configuration file\n>\n> cpu_operator_cost | 0.0005 | configuration file\n>\n> cpu_tuple_cost | 0.003 | configuration file\n>\n> DateStyle | ISO, YMD | configuration file\n>\n> default_statistics_target | 100 | configuration file\n>\n> default_text_search_config | pg_catalog.english | configuration file\n>\n> dynamic_shared_memory_type | posix | configuration file\n>\n> effective_cache_size | 22GB | configuration file\n>\n> effective_io_concurrency | 4 | configuration file\n>\n> external_pid_file | /var/run/postgresql/9.6-main.pid | configuration file\n>\n> lc_messages | C | configuration file\n>\n> lc_monetary | en_CA.UTF-8 | configuration file\n>\n> lc_numeric | en_CA.UTF-8 | configuration file\n>\n> lc_time | en_CA.UTF-8 | configuration file\n>\n> listen_addresses | * | configuration file\n>\n> lock_timeout | 100s | configuration file\n>\n> log_autovacuum_min_duration | 0 | configuration file\n>\n> log_checkpoints | on | configuration file\n>\n> log_connections | on | configuration file\n>\n> log_destination | csvlog | configuration file\n>\n> log_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\n> configuration file\n>\n> log_disconnections | on | configuration file\n>\n> log_error_verbosity | default | configuration file\n>\n> log_file_mode | 0600 | configuration file\n>\n> log_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\n>\n> log_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file\n>\n> log_lock_waits | on | configuration file\n>\n> log_min_duration_statement | 0 | configuration file\n>\n> log_min_error_statement | debug1 | configuration file\n>\n> log_min_messages | debug1 | configuration file\n>\n> log_rotation_size | 1GB | configuration file\n>\n> log_temp_files | 0 | configuration file\n>\n> log_timezone | localtime | configuration file\n>\n> logging_collector | on | configuration file\n>\n> maintenance_work_mem | 3GB | configuration file\n>\n> max_connections | 10 | configuration file\n>\n> max_locks_per_transaction | 256 | configuration file\n>\n> max_parallel_workers_per_gather | 14 | configuration file\n>\n> max_stack_depth | 2MB | environment variable\n>\n> max_wal_size | 4GB | configuration file\n>\n> max_worker_processes | 14 | configuration file\n>\n> min_wal_size | 2GB | configuration file\n>\n> parallel_setup_cost | 1000 | configuration file\n>\n> parallel_tuple_cost | 0.012 | configuration file\n>\n> port | 5432 | configuration file\n>\n> random_page_cost | 22 | configuration file\n>\n> seq_page_cost | 1 | configuration file\n>\n> shared_buffers | 34GB | configuration file\n>\n> shared_preload_libraries | pg_stat_statements | configuration file\n>\n> ssl | on | configuration file\n>\n> ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n>\n> ssl_key_file | 
/etc/ssl/private/ssl-cert-snakeoil.key | configuration file\n>\n> statement_timeout | 1000000s | configuration file\n>\n> stats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp |\n> configuration file\n>\n> superuser_reserved_connections | 1 | configuration file\n>\n> syslog_facility | local1 | configuration file\n>\n> syslog_ident | postgres | configuration file\n>\n> syslog_sequence_numbers | on | configuration file\n>\n> temp_file_limit | 80GB | configuration file\n>\n> TimeZone | localtime | configuration file\n>\n> track_activities | on | configuration file\n>\n> track_counts | on | configuration file\n>\n> track_functions | all | configuration file\n>\n> unix_socket_directories | /var/run/postgresql | configuration file\n>\n> vacuum_cost_delay | 1ms | configuration file\n>\n> vacuum_cost_limit | 5000 | configuration file\n>\n> vacuum_cost_page_dirty | 200 | configuration file\n>\n> vacuum_cost_page_hit | 10 | configuration file\n>\n> vacuum_cost_page_miss | 100 | configuration file\n>\n> wal_buffers | 16MB | configuration file\n>\n> wal_compression | on | configuration file\n>\n> wal_sync_method | fdatasync | configuration file\n>\n> work_mem | 1468006kB | configuration file\n>\n>\n> The part of /etc/sysctl.conf I modified is:\n>\n> vm.swappiness = 1\n>\n> vm.dirty_background_bytes = 134217728\n>\n> vm.dirty_bytes = 1073741824\n>\n> vm.overcommit_ratio = 100\n>\n> vm.zone_reclaim_mode = 0\n>\n> kernel.numa_balancing = 0\n>\n> kernel.sched_autogroup_enabled = 0\n>\n> kernel.sched_migration_cost_ns = 5000000\n>\n>\n> The problem I have is very poor read. When I benchmark my array with fio I\n> get random reads of about 200MB/s and 1100IOPS and sequential reads of\n> about 286MB/s and 21000IPS. But when I watch my queries using pg_activity,\n> I get at best 4MB/s. Also using dstat I can see that iowait time is at\n> about 25%. This problem is not query-dependent.\n>\n> I backed up the database, I reformated the array making sure it is well\n> aligned then restored the database and got the same result.\n>\n> Where should I target my troubleshooting at this stage? I reformatted my\n> drive, I tuned my postgresql.conf and OS as much as I could. The hardware\n> doesn’t seem to have any issues, I am really puzzled.\n>\n> Thanks!\n>\n>\n> Charles\n>\n> --\n> Charles Nadeau Ph.D.\n>\n\nAlthough probably not the root cause, at the least I would set up hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up quite a bit as well (256 ?).On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]> wrote:\nI’m running\nPostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic). 
Hardware\nis:\n*2x Intel Xeon E5550\n*72GB RAM\n*Hardware RAID10 (4\nx 146GB SAS 10k) P410i controller with 1GB FBWC (80% read/20% write)\nfor Postgresql data only:\n Logical Drive:\n3\n Size: 273.4\nGB\n Fault\nTolerance: 1+0\n Heads: 255\n Sectors Per\nTrack: 32\n Cylinders:\n65535\n Strip Size:\n128 KB\n Full Stripe\nSize: 256 KB\n Status: OK\n Caching: \nEnabled\n Unique\nIdentifier: 600508B1001037383941424344450A00\n Disk Name:\n/dev/sdc\n Mount\nPoints: /mnt/data 273.4 GB\n OS Status:\nLOCKED\n Logical\nDrive Label: A00A194750123456789ABCDE516F\n Mirror\nGroup 0:\n \nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n Mirror\nGroup 1:\n \nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n Drive Type:\nData\nFormatted with ext4\nwith: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v /dev/sdc1.\nMounted in\n/etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n/mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\nPostgresql is the\nonly application running on this server.\n\n\nPostgresql is used\nas a mini data warehouse to generate reports and do statistical\nanalysis. It is used by at most 2 users and fresh data is added every\n10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\n name \n | current_setting | \nsource \n\n---------------------------------+------------------------------------------------+----------------------\n application_name \n | psql | client\n\nautovacuum_vacuum_scale_factor | 0 \n | configuration file\n\nautovacuum_vacuum_threshold | 2000 \n | configuration file\n\ncheckpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout \n | 30min |\nconfiguration file\n client_encoding \n | UTF8 | client\n client_min_messages\n | log |\nconfiguration file\n cluster_name \n | 9.6/main |\nconfiguration file\n\ncpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost \n | 0.0005 |\nconfiguration file\n cpu_tuple_cost \n | 0.003 |\nconfiguration file\n DateStyle \n | ISO, YMD |\nconfiguration file\n\ndefault_statistics_target | 100 \n | configuration file\n\ndefault_text_search_config | pg_catalog.english \n | configuration file\n\ndynamic_shared_memory_type | posix \n | configuration file\n\neffective_cache_size | 22GB \n | configuration file\n\neffective_io_concurrency | 4 \n | configuration file\n external_pid_file \n | /var/run/postgresql/9.6-main.pid |\nconfiguration file\n lc_messages \n | C |\nconfiguration file\n lc_monetary \n | en_CA.UTF-8 |\nconfiguration file\n lc_numeric \n | en_CA.UTF-8 |\nconfiguration file\n lc_time \n | en_CA.UTF-8 |\nconfiguration file\n listen_addresses \n | * |\nconfiguration file\n lock_timeout \n | 100s |\nconfiguration file\n\nlog_autovacuum_min_duration | 0 \n | configuration file\n log_checkpoints \n | on |\nconfiguration file\n log_connections \n | on |\nconfiguration file\n log_destination \n | csvlog |\nconfiguration file\n log_directory \n | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n log_disconnections \n | on |\nconfiguration file\n log_error_verbosity\n | default |\nconfiguration file\n log_file_mode \n | 0600 |\nconfiguration file\n log_filename \n | postgresql-%Y-%m-%d_%H%M%S.log |\nconfiguration file\n log_line_prefix \n | user=%u,db=%d,app=%aclient=%h |\nconfiguration file\n log_lock_waits \n | 
on |\nconfiguration file\n\nlog_min_duration_statement | 0 \n | configuration file\n\nlog_min_error_statement | debug1 \n | configuration file\n log_min_messages \n | debug1 |\nconfiguration file\n log_rotation_size \n | 1GB |\nconfiguration file\n log_temp_files \n | 0 |\nconfiguration file\n log_timezone \n | localtime |\nconfiguration file\n logging_collector \n | on |\nconfiguration file\n\nmaintenance_work_mem | 3GB \n | configuration file\n max_connections \n | 10 |\nconfiguration file\n\nmax_locks_per_transaction | 256 \n | configuration file\n\nmax_parallel_workers_per_gather | 14 \n | configuration file\n max_stack_depth \n | 2MB |\nenvironment variable\n max_wal_size \n | 4GB |\nconfiguration file\n\nmax_worker_processes | 14 \n | configuration file\n min_wal_size \n | 2GB |\nconfiguration file\n parallel_setup_cost\n | 1000 |\nconfiguration file\n parallel_tuple_cost\n | 0.012 |\nconfiguration file\n port \n | 5432 |\nconfiguration file\n random_page_cost \n | 22 |\nconfiguration file\n seq_page_cost \n | 1 |\nconfiguration file\n shared_buffers \n | 34GB |\nconfiguration file\n\nshared_preload_libraries | pg_stat_statements \n | configuration file\n ssl \n | on |\nconfiguration file\n ssl_cert_file \n | /etc/ssl/certs/ssl-cert-snakeoil.pem |\nconfiguration file\n ssl_key_file \n | /etc/ssl/private/ssl-cert-snakeoil.key |\nconfiguration file\n statement_timeout \n | 1000000s |\nconfiguration file\n\nstats_temp_directory |\n/var/run/postgresql/9.6-main.pg_stat_tmp | configuration file\n\nsuperuser_reserved_connections | 1 \n | configuration file\n syslog_facility \n | local1 |\nconfiguration file\n syslog_ident \n | postgres |\nconfiguration file\n\nsyslog_sequence_numbers | on \n | configuration file\n temp_file_limit \n | 80GB |\nconfiguration file\n TimeZone \n | localtime |\nconfiguration file\n track_activities \n | on |\nconfiguration file\n track_counts \n | on |\nconfiguration file\n track_functions \n | all |\nconfiguration file\n\nunix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_cost_delay \n | 1ms |\nconfiguration file\n vacuum_cost_limit \n | 5000 |\nconfiguration file\n\nvacuum_cost_page_dirty | 200 \n | configuration file\n\nvacuum_cost_page_hit | 10 \n | configuration file\n\nvacuum_cost_page_miss | 100 \n | configuration file\n wal_buffers \n | 16MB |\nconfiguration file\n wal_compression \n | on |\nconfiguration file\n wal_sync_method \n | fdatasync |\nconfiguration file\n work_mem \n | 1468006kB |\nconfiguration file\n\n\nThe part of\n/etc/sysctl.conf I modified is:\nvm.swappiness = 1\nvm.dirty_background_bytes\n= 134217728\nvm.dirty_bytes =\n1073741824\nvm.overcommit_ratio\n= 100\nvm.zone_reclaim_mode\n= 0\nkernel.numa_balancing\n= 0\nkernel.sched_autogroup_enabled\n= 0\nkernel.sched_migration_cost_ns\n= 5000000\n\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is\nat about 25%. This problem is not query-dependent.\nI backed up the\ndatabase, I reformated the array making sure it is well aligned then\nrestored the database and got the same result.\nWhere should I\ntarget my troubleshooting at this stage? I reformatted my drive, I\ntuned my postgresql.conf and OS as much as I could. 
The hardware\ndoesn’t seem to have any issues, I am really puzzled.\nThanks!\n\n\nCharles-- Charles Nadeau Ph.D.",
"msg_date": "Mon, 10 Jul 2017 11:25:45 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
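What those two suggestions look like concretely, with illustrative rather than tuned numbers; the huge page count assumes the 34GB shared_buffers shown above with 2MB huge pages, and effective_io_concurrency only influences bitmap heap scan prefetching:

# postgresql.conf
huge_pages = try                  # 'on' makes startup fail loudly if the pool is too small; needs a restart
effective_io_concurrency = 256    # prefetch depth for bitmap heap scans; can also be set per tablespace

# /etc/sysctl.conf -- 34GB / 2MB is roughly 17400 pages, rounded up for headroom
vm.nr_hugepages = 18000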
{
"msg_contents": "\n\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\n> random_page_cost | 22 \n\n\nwhy such a high value for random_page_cost?\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Jul 2017 17:35:56 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Andreas,\n\nBecause the ratio between the Sequential IOPS and Random IOPS is about 29.\nTaking into account that part of the data is in RAM, I obtained an\n\"effective\" ratio of about 22.\nThanks!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]\n> wrote:\n\n>\n>\n> Am 10.07.2017 um 16:03 schrieb Charles Nadeau:\n>\n>> random_page_cost | 22\n>>\n>\n>\n> why such a high value for random_page_cost?\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nAndreas,Because the ratio between the Sequential IOPS and Random IOPS is about 29. Taking into account that part of the data is in RAM, I obtained an \"effective\" ratio of about 22.Thanks!CharlesOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]> wrote:\n\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\n\nrandom_page_cost | 22 \n\n\n\nwhy such a high value for random_page_cost?\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Mon, 10 Jul 2017 17:48:25 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Charles Nadeau\r\nSent: Monday, July 10, 2017 11:48 AM\r\nTo: Andreas Kretschmer <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nAndreas,\r\n\r\nBecause the ratio between the Sequential IOPS and Random IOPS is about 29. Taking into account that part of the data is in RAM, I obtained an \"effective\" ratio of about 22.\r\nThanks!\r\n\r\nCharles\r\n\r\nOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\r\nrandom_page_cost | 22\r\n\r\n\r\nwhy such a high value for random_page_cost?\r\n\r\nRegards, Andreas\r\n\r\n--\r\n2ndQuadrant - The PostgreSQL Support Company.\r\nwww.2ndQuadrant.com<http://www.2ndQuadrant.com>\r\n\r\n\r\n--\r\nCharles Nadeau Ph.D.\r\nhttp://charlesnadeau.blogspot.com/\r\n\r\n\r\nConsidering RAM size of 72 GB and your database size of ~225GB, and also the fact that Postgres is the only app running on the server, probably 1/3 of your database resides in memory, so random_page_cost = 22 looks extremely high, probably it completely precludes index usage in your queries.\r\n\r\nYou should try this setting at least at its default value: random_page_cost =4, and probably go even lower.\r\nAlso, effective_cache_size is at least as big as your shared_buffers. Having 72GB RAM t effective_cache_size should be set around 64GB (again considering that Postgres is the only app running on the server).\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Charles Nadeau\nSent: Monday, July 10, 2017 11:48 AM\nTo: Andreas Kretschmer <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \n\n\nAndreas, \n\n \n\n\nBecause the ratio between the Sequential IOPS and Random IOPS is about 29. Taking into account that part of the data is in RAM, I obtained an \"effective\" ratio of about 22.\n\n\nThanks!\n\n\n \n\n\nCharles\n\n\n\n \n\nOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]> wrote:\n\n\n\r\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\n\nrandom_page_cost | 22 \n\n\n\n\n\n\r\nwhy such a high value for random_page_cost?\n\r\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n\n-- \n\nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\n \n\n \nConsidering RAM size of 72 GB and your database size of ~225GB, and also the fact that Postgres is the only app running on the server, probably 1/3 of your database\r\n resides in memory, so random_page_cost = 22 looks extremely high, probably it completely precludes index usage in your queries.\n \nYou should try this setting at least at its default value: random_page_cost =4, and probably go even lower.\nAlso, effective_cache_size is at least as big as your shared_buffers. Having 72GB RAM t\neffective_cache_size should be set around 64GB (again considering that Postgres is the only app running on the server).\n \nRegards,\nIgor Neyman",
"msg_date": "Mon, 10 Jul 2017 18:35:16 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
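A minimal sketch of applying the settings Igor suggests above on PostgreSQL 9.6, assuming superuser access through psql; the values are the ones proposed in the thread, and ALTER SYSTEM is just one convenient way to persist them (editing postgresql.conf works equally well). Neither setting needs a restart.

# Sketch only: persist the suggested planner settings and reload.
psql -U postgres -d postgres <<'SQL'
ALTER SYSTEM SET random_page_cost = 4;           -- back to the default; possibly lower later
ALTER SYSTEM SET effective_cache_size = '64GB';  -- planner's estimate of shared_buffers + OS cache
SELECT pg_reload_conf();                         -- both settings take effect on reload
SQL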
{
"msg_contents": "On Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau <[email protected]>\nwrote:\n\n>\n> The problem I have is very poor read. When I benchmark my array with fio I\n> get random reads of about 200MB/s and 1100IOPS and sequential reads of\n> about 286MB/s and 21000IPS.\n>\n\n\nThat doesn't seem right. Sequential is only 43% faster? What job file are\ngiving to fio?\n\nWhat do you get if you do something simpler, like:\n\ntime cat ~/$PGDATA/base/16402/*|wc -c\n\nreplacing 16402 with whatever your biggest database is.\n\nCheers,\n\nJeff\n\nOn Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau <[email protected]> wrote:\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. That doesn't seem right. Sequential is only 43% faster? What job file are giving to fio?What do you get if you do something simpler, like: time cat ~/$PGDATA/base/16402/*|wc -creplacing 16402 with whatever your biggest database is.Cheers,Jeff",
"msg_date": "Mon, 10 Jul 2017 12:24:04 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
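Jeff's raw-read check needs the OID of the largest database; a hedged sketch of one way to look it up and run the test follows. The data directory path is the /mnt/data/postgresql location that appears later in the thread, but the exact layout is an assumption.

# Sketch: find the largest database's OID, then time a raw sequential read of its files.
PGDATA=/mnt/data/postgresql                   # assumed data directory under the /mnt/data mount
OID=$(psql -At -U postgres -c \
  "SELECT oid FROM pg_database ORDER BY pg_database_size(oid) DESC LIMIT 1")
sync
sudo sh -c "time cat $PGDATA/base/$OID/* | wc -c"   # bytes / elapsed seconds ~= sequential read rate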
{
"msg_contents": "Rick,\n\nI applied the change you recommended but it didn't speed up the reads.\nOne thing I forgot to mention earlier is the speed of the backup made with\nthe COPY operations seems almost normal: I have read speed of up to 85MB/s.\nThanks for your help!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]>\nwrote:\n\n> Although probably not the root cause, at the least I would set up\n> hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-\n> resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up\n> quite a bit as well (256 ?).\n>\n>\n> On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]\n> > wrote:\n>\n>> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n>> Hardware is:\n>>\n>> *2x Intel Xeon E5550\n>>\n>> *72GB RAM\n>>\n>> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n>> read/20% write) for Postgresql data only:\n>>\n>> Logical Drive: 3\n>>\n>> Size: 273.4 GB\n>>\n>> Fault Tolerance: 1+0\n>>\n>> Heads: 255\n>>\n>> Sectors Per Track: 32\n>>\n>> Cylinders: 65535\n>>\n>> Strip Size: 128 KB\n>>\n>> Full Stripe Size: 256 KB\n>>\n>> Status: OK\n>>\n>> Caching: Enabled\n>>\n>> Unique Identifier: 600508B1001037383941424344450A00\n>>\n>> Disk Name: /dev/sdc\n>>\n>> Mount Points: /mnt/data 273.4 GB\n>>\n>> OS Status: LOCKED\n>>\n>> Logical Drive Label: A00A194750123456789ABCDE516F\n>>\n>> Mirror Group 0:\n>>\n>> physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n>>\n>> physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n>>\n>> Mirror Group 1:\n>>\n>> physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n>>\n>> physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n>>\n>> Drive Type: Data\n>>\n>> Formatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v\n>> /dev/sdc1.\n>>\n>> Mounted in /etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n>> /mnt/data ext4 rw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro\n>> 0 1\"\n>>\n>> Postgresql is the only application running on this server.\n>>\n>>\n>> Postgresql is used as a mini data warehouse to generate reports and do\n>> statistical analysis. It is used by at most 2 users and fresh data is added\n>> every 10 days. 
The database has 16 tables: one is 224GB big and the rest\n>> are between 16kB and 470MB big.\n>>\n>>\n>> My configuration is:\n>>\n>>\n>> name | current_setting | source\n>>\n>> ---------------------------------+--------------------------\n>> ----------------------+----------------------\n>>\n>> application_name | psql | client\n>>\n>> autovacuum_vacuum_scale_factor | 0 | configuration file\n>>\n>> autovacuum_vacuum_threshold | 2000 | configuration file\n>>\n>> checkpoint_completion_target | 0.9 | configuration file\n>>\n>> checkpoint_timeout | 30min | configuration file\n>>\n>> client_encoding | UTF8 | client\n>>\n>> client_min_messages | log | configuration file\n>>\n>> cluster_name | 9.6/main | configuration file\n>>\n>> cpu_index_tuple_cost | 0.001 | configuration file\n>>\n>> cpu_operator_cost | 0.0005 | configuration file\n>>\n>> cpu_tuple_cost | 0.003 | configuration file\n>>\n>> DateStyle | ISO, YMD | configuration file\n>>\n>> default_statistics_target | 100 | configuration file\n>>\n>> default_text_search_config | pg_catalog.english | configuration file\n>>\n>> dynamic_shared_memory_type | posix | configuration file\n>>\n>> effective_cache_size | 22GB | configuration file\n>>\n>> effective_io_concurrency | 4 | configuration file\n>>\n>> external_pid_file | /var/run/postgresql/9.6-main.pid | configuration file\n>>\n>> lc_messages | C | configuration file\n>>\n>> lc_monetary | en_CA.UTF-8 | configuration file\n>>\n>> lc_numeric | en_CA.UTF-8 | configuration file\n>>\n>> lc_time | en_CA.UTF-8 | configuration file\n>>\n>> listen_addresses | * | configuration file\n>>\n>> lock_timeout | 100s | configuration file\n>>\n>> log_autovacuum_min_duration | 0 | configuration file\n>>\n>> log_checkpoints | on | configuration file\n>>\n>> log_connections | on | configuration file\n>>\n>> log_destination | csvlog | configuration file\n>>\n>> log_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\n>> configuration file\n>>\n>> log_disconnections | on | configuration file\n>>\n>> log_error_verbosity | default | configuration file\n>>\n>> log_file_mode | 0600 | configuration file\n>>\n>> log_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\n>>\n>> log_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file\n>>\n>> log_lock_waits | on | configuration file\n>>\n>> log_min_duration_statement | 0 | configuration file\n>>\n>> log_min_error_statement | debug1 | configuration file\n>>\n>> log_min_messages | debug1 | configuration file\n>>\n>> log_rotation_size | 1GB | configuration file\n>>\n>> log_temp_files | 0 | configuration file\n>>\n>> log_timezone | localtime | configuration file\n>>\n>> logging_collector | on | configuration file\n>>\n>> maintenance_work_mem | 3GB | configuration file\n>>\n>> max_connections | 10 | configuration file\n>>\n>> max_locks_per_transaction | 256 | configuration file\n>>\n>> max_parallel_workers_per_gather | 14 | configuration file\n>>\n>> max_stack_depth | 2MB | environment variable\n>>\n>> max_wal_size | 4GB | configuration file\n>>\n>> max_worker_processes | 14 | configuration file\n>>\n>> min_wal_size | 2GB | configuration file\n>>\n>> parallel_setup_cost | 1000 | configuration file\n>>\n>> parallel_tuple_cost | 0.012 | configuration file\n>>\n>> port | 5432 | configuration file\n>>\n>> random_page_cost | 22 | configuration file\n>>\n>> seq_page_cost | 1 | configuration file\n>>\n>> shared_buffers | 34GB | configuration file\n>>\n>> shared_preload_libraries | pg_stat_statements | configuration file\n>>\n>> ssl | on | configuration 
file\n>>\n>> ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n>>\n>> ssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | configuration\n>> file\n>>\n>> statement_timeout | 1000000s | configuration file\n>>\n>> stats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp |\n>> configuration file\n>>\n>> superuser_reserved_connections | 1 | configuration file\n>>\n>> syslog_facility | local1 | configuration file\n>>\n>> syslog_ident | postgres | configuration file\n>>\n>> syslog_sequence_numbers | on | configuration file\n>>\n>> temp_file_limit | 80GB | configuration file\n>>\n>> TimeZone | localtime | configuration file\n>>\n>> track_activities | on | configuration file\n>>\n>> track_counts | on | configuration file\n>>\n>> track_functions | all | configuration file\n>>\n>> unix_socket_directories | /var/run/postgresql | configuration file\n>>\n>> vacuum_cost_delay | 1ms | configuration file\n>>\n>> vacuum_cost_limit | 5000 | configuration file\n>>\n>> vacuum_cost_page_dirty | 200 | configuration file\n>>\n>> vacuum_cost_page_hit | 10 | configuration file\n>>\n>> vacuum_cost_page_miss | 100 | configuration file\n>>\n>> wal_buffers | 16MB | configuration file\n>>\n>> wal_compression | on | configuration file\n>>\n>> wal_sync_method | fdatasync | configuration file\n>>\n>> work_mem | 1468006kB | configuration file\n>>\n>>\n>> The part of /etc/sysctl.conf I modified is:\n>>\n>> vm.swappiness = 1\n>>\n>> vm.dirty_background_bytes = 134217728\n>>\n>> vm.dirty_bytes = 1073741824\n>>\n>> vm.overcommit_ratio = 100\n>>\n>> vm.zone_reclaim_mode = 0\n>>\n>> kernel.numa_balancing = 0\n>>\n>> kernel.sched_autogroup_enabled = 0\n>>\n>> kernel.sched_migration_cost_ns = 5000000\n>>\n>>\n>> The problem I have is very poor read. When I benchmark my array with fio\n>> I get random reads of about 200MB/s and 1100IOPS and sequential reads of\n>> about 286MB/s and 21000IPS. But when I watch my queries using pg_activity,\n>> I get at best 4MB/s. Also using dstat I can see that iowait time is at\n>> about 25%. This problem is not query-dependent.\n>>\n>> I backed up the database, I reformated the array making sure it is well\n>> aligned then restored the database and got the same result.\n>>\n>> Where should I target my troubleshooting at this stage? I reformatted my\n>> drive, I tuned my postgresql.conf and OS as much as I could. The hardware\n>> doesn’t seem to have any issues, I am really puzzled.\n>>\n>> Thanks!\n>>\n>>\n>> Charles\n>>\n>> --\n>> Charles Nadeau Ph.D.\n>>\n>\n>\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nRick,I applied the change you recommended but it didn't speed up the reads.One thing I forgot to mention earlier is the speed of the backup made with the COPY operations seems almost normal: I have read speed of up to 85MB/s.Thanks for your help!CharlesOn Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]> wrote:Although probably not the root cause, at the least I would set up hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up quite a bit as well (256 ?).On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]> wrote:\nI’m running\nPostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic). 
Hardware\nis:\n*2x Intel Xeon E5550\n*72GB RAM\n*Hardware RAID10 (4\nx 146GB SAS 10k) P410i controller with 1GB FBWC (80% read/20% write)\nfor Postgresql data only:\n Logical Drive:\n3\n Size: 273.4\nGB\n Fault\nTolerance: 1+0\n Heads: 255\n Sectors Per\nTrack: 32\n Cylinders:\n65535\n Strip Size:\n128 KB\n Full Stripe\nSize: 256 KB\n Status: OK\n Caching: \nEnabled\n Unique\nIdentifier: 600508B1001037383941424344450A00\n Disk Name:\n/dev/sdc\n Mount\nPoints: /mnt/data 273.4 GB\n OS Status:\nLOCKED\n Logical\nDrive Label: A00A194750123456789ABCDE516F\n Mirror\nGroup 0:\n \nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n Mirror\nGroup 1:\n \nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n Drive Type:\nData\nFormatted with ext4\nwith: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v /dev/sdc1.\nMounted in\n/etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n/mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\nPostgresql is the\nonly application running on this server.\n\n\nPostgresql is used\nas a mini data warehouse to generate reports and do statistical\nanalysis. It is used by at most 2 users and fresh data is added every\n10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\n name \n | current_setting | \nsource \n\n---------------------------------+------------------------------------------------+----------------------\n application_name \n | psql | client\n\nautovacuum_vacuum_scale_factor | 0 \n | configuration file\n\nautovacuum_vacuum_threshold | 2000 \n | configuration file\n\ncheckpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout \n | 30min |\nconfiguration file\n client_encoding \n | UTF8 | client\n client_min_messages\n | log |\nconfiguration file\n cluster_name \n | 9.6/main |\nconfiguration file\n\ncpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost \n | 0.0005 |\nconfiguration file\n cpu_tuple_cost \n | 0.003 |\nconfiguration file\n DateStyle \n | ISO, YMD |\nconfiguration file\n\ndefault_statistics_target | 100 \n | configuration file\n\ndefault_text_search_config | pg_catalog.english \n | configuration file\n\ndynamic_shared_memory_type | posix \n | configuration file\n\neffective_cache_size | 22GB \n | configuration file\n\neffective_io_concurrency | 4 \n | configuration file\n external_pid_file \n | /var/run/postgresql/9.6-main.pid |\nconfiguration file\n lc_messages \n | C |\nconfiguration file\n lc_monetary \n | en_CA.UTF-8 |\nconfiguration file\n lc_numeric \n | en_CA.UTF-8 |\nconfiguration file\n lc_time \n | en_CA.UTF-8 |\nconfiguration file\n listen_addresses \n | * |\nconfiguration file\n lock_timeout \n | 100s |\nconfiguration file\n\nlog_autovacuum_min_duration | 0 \n | configuration file\n log_checkpoints \n | on |\nconfiguration file\n log_connections \n | on |\nconfiguration file\n log_destination \n | csvlog |\nconfiguration file\n log_directory \n | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n log_disconnections \n | on |\nconfiguration file\n log_error_verbosity\n | default |\nconfiguration file\n log_file_mode \n | 0600 |\nconfiguration file\n log_filename \n | postgresql-%Y-%m-%d_%H%M%S.log |\nconfiguration file\n log_line_prefix \n | user=%u,db=%d,app=%aclient=%h |\nconfiguration file\n log_lock_waits \n | 
on |\nconfiguration file\n\nlog_min_duration_statement | 0 \n | configuration file\n\nlog_min_error_statement | debug1 \n | configuration file\n log_min_messages \n | debug1 |\nconfiguration file\n log_rotation_size \n | 1GB |\nconfiguration file\n log_temp_files \n | 0 |\nconfiguration file\n log_timezone \n | localtime |\nconfiguration file\n logging_collector \n | on |\nconfiguration file\n\nmaintenance_work_mem | 3GB \n | configuration file\n max_connections \n | 10 |\nconfiguration file\n\nmax_locks_per_transaction | 256 \n | configuration file\n\nmax_parallel_workers_per_gather | 14 \n | configuration file\n max_stack_depth \n | 2MB |\nenvironment variable\n max_wal_size \n | 4GB |\nconfiguration file\n\nmax_worker_processes | 14 \n | configuration file\n min_wal_size \n | 2GB |\nconfiguration file\n parallel_setup_cost\n | 1000 |\nconfiguration file\n parallel_tuple_cost\n | 0.012 |\nconfiguration file\n port \n | 5432 |\nconfiguration file\n random_page_cost \n | 22 |\nconfiguration file\n seq_page_cost \n | 1 |\nconfiguration file\n shared_buffers \n | 34GB |\nconfiguration file\n\nshared_preload_libraries | pg_stat_statements \n | configuration file\n ssl \n | on |\nconfiguration file\n ssl_cert_file \n | /etc/ssl/certs/ssl-cert-snakeoil.pem |\nconfiguration file\n ssl_key_file \n | /etc/ssl/private/ssl-cert-snakeoil.key |\nconfiguration file\n statement_timeout \n | 1000000s |\nconfiguration file\n\nstats_temp_directory |\n/var/run/postgresql/9.6-main.pg_stat_tmp | configuration file\n\nsuperuser_reserved_connections | 1 \n | configuration file\n syslog_facility \n | local1 |\nconfiguration file\n syslog_ident \n | postgres |\nconfiguration file\n\nsyslog_sequence_numbers | on \n | configuration file\n temp_file_limit \n | 80GB |\nconfiguration file\n TimeZone \n | localtime |\nconfiguration file\n track_activities \n | on |\nconfiguration file\n track_counts \n | on |\nconfiguration file\n track_functions \n | all |\nconfiguration file\n\nunix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_cost_delay \n | 1ms |\nconfiguration file\n vacuum_cost_limit \n | 5000 |\nconfiguration file\n\nvacuum_cost_page_dirty | 200 \n | configuration file\n\nvacuum_cost_page_hit | 10 \n | configuration file\n\nvacuum_cost_page_miss | 100 \n | configuration file\n wal_buffers \n | 16MB |\nconfiguration file\n wal_compression \n | on |\nconfiguration file\n wal_sync_method \n | fdatasync |\nconfiguration file\n work_mem \n | 1468006kB |\nconfiguration file\n\n\nThe part of\n/etc/sysctl.conf I modified is:\nvm.swappiness = 1\nvm.dirty_background_bytes\n= 134217728\nvm.dirty_bytes =\n1073741824\nvm.overcommit_ratio\n= 100\nvm.zone_reclaim_mode\n= 0\nkernel.numa_balancing\n= 0\nkernel.sched_autogroup_enabled\n= 0\nkernel.sched_migration_cost_ns\n= 5000000\n\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is\nat about 25%. This problem is not query-dependent.\nI backed up the\ndatabase, I reformated the array making sure it is well aligned then\nrestored the database and got the same result.\nWhere should I\ntarget my troubleshooting at this stage? I reformatted my drive, I\ntuned my postgresql.conf and OS as much as I could. 
The hardware\ndoesn’t seem to have any issues, I am really puzzled.\nThanks!\n\n\nCharles-- Charles Nadeau Ph.D.\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 11 Jul 2017 12:39:19 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
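Rick's huge-pages suggestion, which Charles says he applied, is not spelled out anywhere in the thread; the sketch below shows one plausible way to do it on this box, with the page count derived from the 34GB shared_buffers shown in the configuration above. The sysctl file name and the extra headroom are assumptions.

# Sketch only: reserve 2MB huge pages for a ~34GB shared memory segment (17408 pages + headroom).
echo "vm.nr_hugepages = 17500" | sudo tee /etc/sysctl.d/30-postgresql.conf
sudo sysctl -p /etc/sysctl.d/30-postgresql.conf
# and in postgresql.conf (restart required):
#   huge_pages = on                  # fail at startup if the reservation is missing
#   effective_io_concurrency = 256   # the other change Rick proposed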
{
"msg_contents": "Igor,\n\nI reduced the value of random_page_cost to 4 but the read speed remains low.\nRegarding effective_cache_size and shared_buffer, do you mean they should\nbe both equal to 64GB?\nThanks for suggestions!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 8:35 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Charles Nadeau\n> *Sent:* Monday, July 10, 2017 11:48 AM\n> *To:* Andreas Kretschmer <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Andreas,\n>\n>\n>\n> Because the ratio between the Sequential IOPS and Random IOPS is about 29.\n> Taking into account that part of the data is in RAM, I obtained an\n> \"effective\" ratio of about 22.\n>\n> Thanks!\n>\n>\n>\n> Charles\n>\n>\n>\n> On Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <\n> [email protected]> wrote:\n>\n>\n>\n> Am 10.07.2017 um 16:03 schrieb Charles Nadeau:\n>\n> random_page_cost | 22\n>\n>\n>\n> why such a high value for random_page_cost?\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n> --\n>\n> Charles Nadeau Ph.D.\n> http://charlesnadeau.blogspot.com/\n>\n>\n>\n>\n>\n> Considering RAM size of 72 GB and your database size of ~225GB, and also\n> the fact that Postgres is the only app running on the server, probably 1/3\n> of your database resides in memory, so random_page_cost = 22 looks\n> extremely high, probably it completely precludes index usage in your\n> queries.\n>\n>\n>\n> You should try this setting at least at its default value:\n> random_page_cost =4, and probably go even lower.\n>\n> Also, effective_cache_size is at least as big as your shared_buffers.\n> Having 72GB RAM t effective_cache_size should be set around 64GB (again\n> considering that Postgres is the only app running on the server).\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,I reduced the value of random_page_cost to 4 but the read speed remains low.Regarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?Thanks for suggestions!CharlesOn Mon, Jul 10, 2017 at 8:35 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Charles Nadeau\nSent: Monday, July 10, 2017 11:48 AM\nTo: Andreas Kretschmer <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \n\n\nAndreas, \n\n \n\n\nBecause the ratio between the Sequential IOPS and Random IOPS is about 29. 
Taking into account that part of the data is in RAM, I obtained an \"effective\" ratio of about 22.\n\n\nThanks!\n\n\n \n\n\nCharles\n\n\n\n \n\nOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]> wrote:\n\n\n\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\n\nrandom_page_cost | 22 \n\n\n\n\n\n\nwhy such a high value for random_page_cost?\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n\n-- \n\nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\n \n\n \nConsidering RAM size of 72 GB and your database size of ~225GB, and also the fact that Postgres is the only app running on the server, probably 1/3 of your database\n resides in memory, so random_page_cost = 22 looks extremely high, probably it completely precludes index usage in your queries.\n \nYou should try this setting at least at its default value: random_page_cost =4, and probably go even lower.\nAlso, effective_cache_size is at least as big as your shared_buffers. Having 72GB RAM t\neffective_cache_size should be set around 64GB (again considering that Postgres is the only app running on the server).\n \nRegards,\nIgor Neyman\n \n \n \n \n\n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 11 Jul 2017 12:42:35 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Jeff,\n\nI used fio in a quick benchmarking script inspired by\nhttps://smcleod.net/benchmarking-io/:\n\n#!/bin/bash\n#Random throughput\necho \"Random throughput\"\nsync\nfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test\n--filename=test --bs=4M --iodepth=256 --size=10G --readwrite=randread\n--ramp_time=4\n#Random IOPS\necho \"Random IOPS\"\nsync\nfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test\n--filename=test --bs=4k --iodepth=256 --size=4G --readwrite=randread\n--ramp_time=4\n#Sequential throughput\necho \"Sequential throughput\"\nsync\nfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test\n--filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read\n--ramp_time=4\n#Sequential IOPS\necho \"Sequential IOPS\"\nsync\nfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test\n--filename=test --bs=4k --iodepth=256 --size=4G --readwrite=read\n--ramp_time=4\n\nPerforming the test you suggested, I get 128.5MB/s. Monitoring the test, I\nfind that the throughput is constant from start to finish and that the\niowait is also constant at 5%:\n\ncharles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/*\n| wc -c'\n[sudo] password for charles:\n1.62user 179.94system 29:50.79elapsed 10%CPU (0avgtext+0avgdata\n1920maxresident)k\n448026264inputs+0outputs (0major+117minor)pagefaults 0swaps\n241297594904\n\n\nAfter making the changes to HugePage suggested by Rick Otten (above), I\nfound slightly better results (135.7MB/s):\n\ncharles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/*\n| wc -c'\n[sudo] password for charles:\n0.86user 130.84system 28:15.78elapsed 7%CPU (0avgtext+0avgdata\n1820maxresident)k\n471286792inputs+0outputs (1major+118minor)pagefaults 0swaps\n241297594904\n\n\nCould you suggest another way to benchmark random reads?\n\nThanks for your help!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 9:24 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau <[email protected]>\n> wrote:\n>\n>>\n>> The problem I have is very poor read. When I benchmark my array with fio\n>> I get random reads of about 200MB/s and 1100IOPS and sequential reads of\n>> about 286MB/s and 21000IPS.\n>>\n>\n>\n> That doesn't seem right. Sequential is only 43% faster? 
What job file\n> are giving to fio?\n>\n> What do you get if you do something simpler, like:\n>\n> time cat ~/$PGDATA/base/16402/*|wc -c\n>\n> replacing 16402 with whatever your biggest database is.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nJeff,I used fio in a quick benchmarking script inspired by https://smcleod.net/benchmarking-io/:#!/bin/bash#Random throughputecho \"Random throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=randread --ramp_time=4#Random IOPSecho \"Random IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=randread --ramp_time=4#Sequential throughputecho \"Sequential throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4#Sequential IOPSecho \"Sequential IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=read --ramp_time=4Performing the test you suggested, I get 128.5MB/s. Monitoring the test, I find that the throughput is constant from start to finish and that the iowait is also constant at 5%:charles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/* | wc -c'[sudo] password for charles: 1.62user 179.94system 29:50.79elapsed 10%CPU (0avgtext+0avgdata 1920maxresident)k448026264inputs+0outputs (0major+117minor)pagefaults 0swaps241297594904After making the changes to HugePage suggested by Rick Otten (above), I found slightly better results (135.7MB/s):charles@hpdl380g6:~$ sudo sh -c 'time cat /mnt/data/postgresql/base/16385/* | wc -c'[sudo] password for charles: 0.86user 130.84system 28:15.78elapsed 7%CPU (0avgtext+0avgdata 1820maxresident)k471286792inputs+0outputs (1major+118minor)pagefaults 0swaps241297594904Could you suggest another way to benchmark random reads?Thanks for your help!CharlesOn Mon, Jul 10, 2017 at 9:24 PM, Jeff Janes <[email protected]> wrote:On Mon, Jul 10, 2017 at 7:03 AM, Charles Nadeau <[email protected]> wrote:\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. That doesn't seem right. Sequential is only 43% faster? What job file are giving to fio?What do you get if you do something simpler, like: time cat ~/$PGDATA/base/16402/*|wc -creplacing 16402 with whatever your biggest database is.Cheers,Jeff\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 11 Jul 2017 13:02:05 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
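The fio jobs above use direct I/O, 4MB blocks and a queue depth of 256, which is quite far from how a PostgreSQL backend reads (buffered, 8kB at a time, shallow queue). A hedged sketch of a job closer to that pattern, using standard fio options but not tested on this array:

# Sketch: buffered 8kB random reads with a shallow queue, closer to a single backend's pattern.
mkdir -p /mnt/data/fio
sync
fio --name=pg_like_randread --directory=/mnt/data/fio --size=10G \
    --ioengine=sync --direct=0 --bs=8k --rw=randread \
    --numjobs=4 --iodepth=1 --runtime=60 --time_based --group_reporting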
{
"msg_contents": "After reducing random_page_cost to 4 and testing more, I can report that\nthe aggregate read throughput for parallel sequential scan is about 90MB/s.\nHowever the throughput for sequential scan is still around 4MB/s.\n\nOne more question: if a query uses more than one table, can more than one\ntable be read through a parallel sequential scan? I have many queries\njoining tables and I noticed that there was never more than one table read\nin parallel.\nThanks!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 8:35 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Charles Nadeau\n> *Sent:* Monday, July 10, 2017 11:48 AM\n> *To:* Andreas Kretschmer <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Andreas,\n>\n>\n>\n> Because the ratio between the Sequential IOPS and Random IOPS is about 29.\n> Taking into account that part of the data is in RAM, I obtained an\n> \"effective\" ratio of about 22.\n>\n> Thanks!\n>\n>\n>\n> Charles\n>\n>\n>\n> On Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <\n> [email protected]> wrote:\n>\n>\n>\n> Am 10.07.2017 um 16:03 schrieb Charles Nadeau:\n>\n> random_page_cost | 22\n>\n>\n>\n> why such a high value for random_page_cost?\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n> --\n>\n> Charles Nadeau Ph.D.\n> http://charlesnadeau.blogspot.com/\n>\n>\n>\n>\n>\n> Considering RAM size of 72 GB and your database size of ~225GB, and also\n> the fact that Postgres is the only app running on the server, probably 1/3\n> of your database resides in memory, so random_page_cost = 22 looks\n> extremely high, probably it completely precludes index usage in your\n> queries.\n>\n>\n>\n> You should try this setting at least at its default value:\n> random_page_cost =4, and probably go even lower.\n>\n> Also, effective_cache_size is at least as big as your shared_buffers.\n> Having 72GB RAM t effective_cache_size should be set around 64GB (again\n> considering that Postgres is the only app running on the server).\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nAfter reducing random_page_cost to 4 and testing more, I can report that the aggregate read throughput for parallel sequential scan is about 90MB/s. However the throughput for sequential scan is still around 4MB/s.One more question: if a query uses more than one table, can more than one table be read through a parallel sequential scan? I have many queries joining tables and I noticed that there was never more than one table read in parallel.Thanks!CharlesOn Mon, Jul 10, 2017 at 8:35 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Charles Nadeau\nSent: Monday, July 10, 2017 11:48 AM\nTo: Andreas Kretschmer <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \n\n\nAndreas, \n\n \n\n\nBecause the ratio between the Sequential IOPS and Random IOPS is about 29. 
Taking into account that part of the data is in RAM, I obtained an \"effective\" ratio of about 22.\n\n\nThanks!\n\n\n \n\n\nCharles\n\n\n\n \n\nOn Mon, Jul 10, 2017 at 5:35 PM, Andreas Kretschmer <[email protected]> wrote:\n\n\n\nAm 10.07.2017 um 16:03 schrieb Charles Nadeau:\n\nrandom_page_cost | 22 \n\n\n\n\n\n\nwhy such a high value for random_page_cost?\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n\n-- \n\nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\n \n\n \nConsidering RAM size of 72 GB and your database size of ~225GB, and also the fact that Postgres is the only app running on the server, probably 1/3 of your database\n resides in memory, so random_page_cost = 22 looks extremely high, probably it completely precludes index usage in your queries.\n \nYou should try this setting at least at its default value: random_page_cost =4, and probably go even lower.\nAlso, effective_cache_size is at least as big as your shared_buffers. Having 72GB RAM t\neffective_cache_size should be set around 64GB (again considering that Postgres is the only app running on the server).\n \nRegards,\nIgor Neyman\n \n \n \n \n\n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 11 Jul 2017 14:46:03 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
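Whether a second relation in a join is also read in parallel is easiest to see from the plan itself: look for Gather and Parallel Seq Scan nodes. A sketch with placeholder table names (not from the thread):

# Sketch: inspect a join plan for Gather / Parallel Seq Scan nodes (9.6).
psql -U postgres -d mydb <<'SQL'
SET max_parallel_workers_per_gather = 14;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM   big_table b                         -- placeholder tables
JOIN   other_table o ON o.id = b.other_id;
SQL

In 9.6 a single Gather node normally drives a Parallel Seq Scan on only one side of such a join, which would match what Charles observes.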
{
"msg_contents": "From: Charles Nadeau [mailto:[email protected]]\r\nSent: Tuesday, July 11, 2017 6:43 AM\r\nTo: Igor Neyman <[email protected]>\r\nCc: Andreas Kretschmer <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nIgor,\r\n\r\nI reduced the value of random_page_cost to 4 but the read speed remains low.\r\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\r\nThanks for suggestions!\r\n\r\nCharles\r\n\r\nNo, they should not be equal.\r\nFrom the docs:\r\n\r\neffective_cache_size (integer)\r\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes. The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\r\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\r\n\r\nRegards,\r\nIgor\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Tuesday, July 11, 2017 6:43 AM\nTo: Igor Neyman <[email protected]>\nCc: Andreas Kretschmer <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n\n\n \nIgor,\n\n\n \n\n\nI reduced the value of random_page_cost to 4 but the read speed remains low.\n\n\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\n\n\nThanks for suggestions!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nNo, they should not be equal.\nFrom the docs:\n \neffective_cache_size (integer)\n\r\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it\r\n more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number\r\n of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes.\r\n The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n \nRegards,\nIgor",
"msg_date": "Tue, 11 Jul 2017 14:34:05 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
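A sketch of putting Igor's numbers in place through the Debian/Ubuntu conf.d directory; the include path and service name are assumptions based on a stock 9.6 package install. shared_buffers needs a full restart, while effective_cache_size only needs a reload.

# Sketch only: apply shared_buffers = 24GB / effective_cache_size = 64GB and restart.
cat <<'EOF' | sudo tee /etc/postgresql/9.6/main/conf.d/tuning.conf
shared_buffers = 24GB          # down from the 34GB shown earlier
effective_cache_size = 64GB    # planner hint only; no memory is actually allocated
EOF
sudo systemctl restart postgresql@9.6-main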
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Igor Neyman\r\nSent: Tuesday, July 11, 2017 10:34 AM\r\nTo: Charles Nadeau <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nFrom: Charles Nadeau [mailto:[email protected]]\r\nSent: Tuesday, July 11, 2017 6:43 AM\r\nTo: Igor Neyman <[email protected]<mailto:[email protected]>>\r\nCc: Andreas Kretschmer <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nIgor,\r\n\r\nI reduced the value of random_page_cost to 4 but the read speed remains low.\r\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\r\nThanks for suggestions!\r\n\r\nCharles\r\n\r\nNo, they should not be equal.\r\nFrom the docs:\r\n\r\neffective_cache_size (integer)\r\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes. The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\r\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\r\n\r\nRegards,\r\nIgor\r\n\r\nAlso, maybe it’s time to look at execution plans (explain analyze) of specific slow queries, instead of trying to solve the problem “in general”.\r\n\r\nIgor\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Igor Neyman\nSent: Tuesday, July 11, 2017 10:34 AM\nTo: Charles Nadeau <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Tuesday, July 11, 2017 6:43 AM\nTo: Igor Neyman <[email protected]>\nCc: Andreas Kretschmer <[email protected]>;\r\[email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n\n\n \nIgor,\n\n\n \n\n\nI reduced the value of random_page_cost to 4 but the read speed remains low.\n\n\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\n\n\nThanks for suggestions!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nNo, they should not be equal.\nFrom the docs:\n \neffective_cache_size (integer)\n\r\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it\r\n more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. 
Also, take into account the expected number\r\n of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes.\r\n The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n \nRegards,\n\nIgor\n\n \nAlso, maybe it’s time to look at execution plans (explain analyze) of specific slow queries, instead of trying to solve the problem “in general”.\n \nIgor",
"msg_date": "Tue, 11 Jul 2017 15:16:52 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
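For the per-query approach Igor recommends, something along these lines captures a plan with buffer counts and I/O timings; the query shown is a placeholder to be replaced with one of the slow reporting queries.

# Sketch: one slow query, with buffer hit/read counts and I/O timings in the plan.
psql -U postgres -d mydb <<'SQL'
SET track_io_timing = on;                 -- superuser-settable per session
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM biggest_table
WHERE  some_unindexed_column = 42;        -- placeholder query
SQL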
{
"msg_contents": "Igor,\n\nThe sum of effective_cache_size and shared_buffer will be higher than the\nphysical memory I have. Is it OK?\nThanks!\n\nCharles\n\nOn Tue, Jul 11, 2017 at 4:34 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> *From:* Charles Nadeau [mailto:[email protected]]\n> *Sent:* Tuesday, July 11, 2017 6:43 AM\n> *To:* Igor Neyman <[email protected]>\n> *Cc:* Andreas Kretschmer <[email protected]>;\n> [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Igor,\n>\n>\n>\n> I reduced the value of random_page_cost to 4 but the read speed remains\n> low.\n>\n> Regarding effective_cache_size and shared_buffer, do you mean they should\n> be both equal to 64GB?\n>\n> Thanks for suggestions!\n>\n>\n>\n> Charles\n>\n>\n>\n> No, they should not be equal.\n>\n> From the docs:\n>\n>\n>\n> effective_cache_size (integer)\n>\n> Sets the planner's assumption about the effective size of the disk cache\n> that is available to a single query. This is factored into estimates of the\n> cost of using an index; a higher value makes it more likely index scans\n> will be used, a lower value makes it more likely sequential scans will be\n> used. When setting this parameter you should consider both PostgreSQL's\n> shared buffers and the portion of the kernel's disk cache that will be used\n> for PostgreSQL data files. Also, take into account the expected number of\n> concurrent queries on different tables, since they will have to share the\n> available space. This parameter has no effect on the size of shared memory\n> allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used\n> only for estimation purposes. The system also does not assume data remains\n> in the disk cache between queries. The default is 4 gigabytes (4GB).\n>\n> So, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n>\n>\n>\n> Regards,\n>\n> Igor\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,The sum of effective_cache_size and shared_buffer will be higher than the physical memory I have. Is it OK?Thanks!CharlesOn Tue, Jul 11, 2017 at 4:34 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\n\nSent: Tuesday, July 11, 2017 6:43 AM\nTo: Igor Neyman <[email protected]>\nCc: Andreas Kretschmer <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n\n\n \nIgor,\n\n\n \n\n\nI reduced the value of random_page_cost to 4 but the read speed remains low.\n\n\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\n\n\nThanks for suggestions!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nNo, they should not be equal.\nFrom the docs:\n \neffective_cache_size (integer)\n\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it\n more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number\n of concurrent queries on different tables, since they will have to share the available space. 
This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes.\n The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n \nRegards,\nIgor\n \n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 11 Jul 2017 17:25:14 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "From: Charles Nadeau [mailto:[email protected]]\r\nSent: Tuesday, July 11, 2017 11:25 AM\r\nTo: Igor Neyman <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\n\r\nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\r\n\r\nIgor,\r\n\r\nThe sum of effective_cache_size and shared_buffer will be higher than the physical memory I have. Is it OK?\r\nThanks!\r\n\r\nCharles\r\n\r\nYes, that’s normal.\r\n\r\nshared_buffers is the maximum that Postgres allowed to allocate, while effective_cache_size is just a number that optimizer takes into account when creating execution plan.\r\n\r\nIgor\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Tuesday, July 11, 2017 11:25 AM\nTo: Igor Neyman <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\n \n\n\nIgor, \n\n \n\n\nThe sum of effective_cache_size and shared_buffer will be higher than the physical memory I have. Is it OK?\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nYes, that’s normal.\n \nshared_buffers is the maximum that Postgres allowed to allocate, while effective_cache_size is just a number that optimizer takes into account when creating execution\r\n plan.\n \nIgor",
"msg_date": "Tue, 11 Jul 2017 15:46:30 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "On Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n<[email protected]> wrote:\n> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n> Hardware is:\n>\n> *2x Intel Xeon E5550\n>\n> *72GB RAM\n>\n> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n> read/20% write) for Postgresql data only:\n>\n> The problem I have is very poor read. When I benchmark my array with fio I\n> get random reads of about 200MB/s and 1100IOPS and sequential reads of about\n> 286MB/s and 21000IPS. But when I watch my queries using pg_activity, I get\n> at best 4MB/s. Also using dstat I can see that iowait time is at about 25%.\n> This problem is not query-dependent.\n\nStop right there. 1100 iops * 8kb = ~8mb/sec raw which might\nreasonably translate to 4mb/sec to the client. 200mb/sec random\nread/sec on spinning media is simply not plausible; your benchmark is\nlying to you. Random reads on spinning media are absolutely going to\nbe storage bound.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Jul 2017 18:15:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
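Merlin's arithmetic, spelled out as a back-of-envelope check (the 1100 IOPS figure is the fio result quoted above):

# 1100 random IOPS x 8kB PostgreSQL pages, expressed in MB/s.
IOPS=1100
echo "scale=1; $IOPS * 8 / 1024" | bc     # ~8.5 MB/s raw, i.e. roughly the 4MB/s seen at the client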
{
"msg_contents": "On 07/11/2017 04:15 PM, Merlin Moncure wrote:\n> On Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n> <[email protected]> wrote:\n>> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n>> Hardware is:\n>>\n>> *2x Intel Xeon E5550\n>>\n>> *72GB RAM\n>>\n>> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n>> read/20% write) for Postgresql data only:\n>>\n>> The problem I have is very poor read. When I benchmark my array with fio I\n>> get random reads of about 200MB/s and 1100IOPS and sequential reads of about\n>> 286MB/s and 21000IPS. But when I watch my queries using pg_activity, I get\n>> at best 4MB/s. Also using dstat I can see that iowait time is at about 25%.\n>> This problem is not query-dependent.\n> \n> Stop right there. 1100 iops * 8kb = ~8mb/sec raw which might\n> reasonably translate to 4mb/sec to the client. 200mb/sec random\n> read/sec on spinning media is simply not plausible;\n\nSure it is, if he had more than 4 disks ;) but he also isn't going to \nget 1100 IOPS from 4 10k disks. The average 10k disk is going to get \naround 130 IOPS . If he only has 4 then there is no way he is getting \n1100 IOPS.\n\nUsing the above specs (4x146GB) the best he can reasonably hope for from \nthe drives themselves is about 50MB/s add in the 1GB FWBC and that is \nhow he is getting those high numbers for IOPS but that is because of \ncaching.\n\nHe may need to adjust his readahead as well as his kernel scheduler. At \na minimum he should be able to saturate the drives without issue.\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n\nPostgreSQL Centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Learn: https://pgconf.us\n***** Unless otherwise stated, opinions are my own. *****\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Jul 2017 16:42:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
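A hedged sketch of the readahead and scheduler checks Joshua mentions, using the /dev/sdc device named earlier in the thread; the 4096-sector (2MB) readahead and the deadline scheduler are illustrative starting points, not values recommended in the thread.

# Sketch only, run as root: inspect and raise readahead, check the I/O scheduler.
blockdev --getra /dev/sdc                         # current readahead in 512-byte sectors
blockdev --setra 4096 /dev/sdc                    # try 2MB readahead
cat /sys/block/sdc/queue/scheduler                # e.g. "noop deadline [cfq]"
echo deadline > /sys/block/sdc/queue/scheduler    # often used behind a battery-backed RAID controller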
{
"msg_contents": "On Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau <[email protected]>\nwrote:\n\n> Jeff,\n>\n> I used fio in a quick benchmarking script inspired by https://smcleod.net/\n> benchmarking-io/:\n>\n> #!/bin/bash\n> #Random throughput\n> echo \"Random throughput\"\n> sync\n> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n> --name=test --filename=test --bs=4M --iodepth=256 --size=10G\n> --readwrite=randread --ramp_time=4\n> #Random IOPS\n> echo \"Random IOPS\"\n> sync\n> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n> --name=test --filename=test --bs=4k --iodepth=256 --size=4G\n> --readwrite=randread --ramp_time=4\n> #Sequential throughput\n> echo \"Sequential throughput\"\n> sync\n> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n> --name=test --filename=test --bs=4M --iodepth=256 --size=10G\n> --readwrite=read --ramp_time=4\n> #Sequential IOPS\n> echo \"Sequential IOPS\"\n> sync\n> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n> --name=test --filename=test --bs=4k --iodepth=256 --size=4G\n> --readwrite=read --ramp_time=4\n>\n>\nI don't think any of those are directly relevant to PostgreSQL, as it\ndoesn't use direct IO, doesn't use libaio, and is rarely going to get\nanywhere near 256 iodepth. So the best they can do is put a theoretical\nceiling on the performance. Also, random IO with a 4MB stride doesn't make\nany sense from a PostgreSQL perspective.\n\n\n\n>\n> Performing the test you suggested, I get 128.5MB/s. Monitoring the test, I\n> find that the throughput is constant from start to finish and that the\n> iowait is also constant at 5%:\n>\n\nI would have expected it to do better than that. Maybe you increase the\nkernel readahead setting. I've found the default to be much too small.\nBut it doesn't make much difference to you, as you appear to be doing\nrandom IO in your queries, not sequential.\n\n\n> Could you suggest another way to benchmark random reads?\n>\n\nYour 1100 IOPS times 8kb block size gives about 8MB/s of throughput, which\nis close to what you report. So I think I'd would instead focus on tuning\nyour actual queries. You say the problem is not query-dependent, but I\nthink that that just means all the queries you looked at are similar. If\nyou looked at a query that can't use indexes, like count(unindexed_column)\nfrom biggest_table; you would find it doing much more IO than 4MB/s.\n\nCan you pick the simplest query you actually care about, and post both an\n\"explain (analyze, timing off)\" and an \"explain (analyze, buffers)\" for it?\n (Preferably turning \"track_io_timing\" on first).\n\nOne other question I had, you said you had \"2x Intel Xeon E5550\", which\nshould be 8 CPU (or 16, if the hyperthreads\nare reported as separate CPUs). But you also said: \"Also using dstat I can\nsee that iowait time is at about 25%\". 
Usually if there is only one thing\ngoing on on the server, then IOWAIT won't be more than reciprocal of #CPU.\nIs the server busy doing other stuff at the same time you are benchmarking\nit?\n\nCheers,\n\nJeff\n\nOn Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau <[email protected]> wrote:Jeff,I used fio in a quick benchmarking script inspired by https://smcleod.net/benchmarking-io/:#!/bin/bash#Random throughputecho \"Random throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=randread --ramp_time=4#Random IOPSecho \"Random IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=randread --ramp_time=4#Sequential throughputecho \"Sequential throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4#Sequential IOPSecho \"Sequential IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=read --ramp_time=4I don't think any of those are directly relevant to PostgreSQL, as it doesn't use direct IO, doesn't use libaio, and is rarely going to get anywhere near 256 iodepth. So the best they can do is put a theoretical ceiling on the performance. Also, random IO with a 4MB stride doesn't make any sense from a PostgreSQL perspective. Performing the test you suggested, I get 128.5MB/s. Monitoring the test, I find that the throughput is constant from start to finish and that the iowait is also constant at 5%:I would have expected it to do better than that. Maybe you increase the kernel readahead setting. I've found the default to be much too small. But it doesn't make much difference to you, as you appear to be doing random IO in your queries, not sequential.Could you suggest another way to benchmark random reads?Your 1100 IOPS times 8kb block size gives about 8MB/s of throughput, which is close to what you report. So I think I'd would instead focus on tuning your actual queries. You say the problem is not query-dependent, but I think that that just means all the queries you looked at are similar. If you looked at a query that can't use indexes, like count(unindexed_column) from biggest_table; you would find it doing much more IO than 4MB/s.Can you pick the simplest query you actually care about, and post both an \"explain (analyze, timing off)\" and an \"explain (analyze, buffers)\" for it? (Preferably turning \"track_io_timing\" on first).One other question I had, you said you had \"2x Intel Xeon E5550\", which should be 8 CPU (or 16, if the hyperthreads are reported as separate CPUs). But you also said: \"Also using dstat I can see that iowait time is at about 25%\". Usually if there is only one thing going on on the server, then IOWAIT won't be more than reciprocal of #CPU. Is the server busy doing other stuff at the same time you are benchmarking it?Cheers,Jeff",
"msg_date": "Tue, 11 Jul 2017 17:39:54 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
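A sketch of the sequential-scan check Jeff describes: force a full read of the big table while recording the array's actual read rate. "biggest_table" is a placeholder for the 224GB table mentioned earlier.

# Sketch: full scan of the big table while iostat logs the array's read throughput.
iostat -xm /dev/sdc 5 > /tmp/iostat_seqscan.log &
IOSTAT_PID=$!
psql -U postgres -d mydb -c \
  "EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM biggest_table;"
kill "$IOSTAT_PID"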
{
"msg_contents": "On Tue, Jul 11, 2017 at 4:42 PM, Joshua D. Drake <[email protected]>\nwrote:\n\n> On 07/11/2017 04:15 PM, Merlin Moncure wrote:\n>\n>> On Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n>> <[email protected]> wrote:\n>>\n>>> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n>>> Hardware is:\n>>>\n>>> *2x Intel Xeon E5550\n>>>\n>>> *72GB RAM\n>>>\n>>> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n>>> read/20% write) for Postgresql data only:\n>>>\n>>> The problem I have is very poor read. When I benchmark my array with fio\n>>> I\n>>> get random reads of about 200MB/s and 1100IOPS and sequential reads of\n>>> about\n>>> 286MB/s and 21000IPS. But when I watch my queries using pg_activity, I\n>>> get\n>>> at best 4MB/s. Also using dstat I can see that iowait time is at about\n>>> 25%.\n>>> This problem is not query-dependent.\n>>>\n>>\n>> Stop right there. 1100 iops * 8kb = ~8mb/sec raw which might\n>> reasonably translate to 4mb/sec to the client. 200mb/sec random\n>> read/sec on spinning media is simply not plausible;\n>>\n>\n> Sure it is, if he had more than 4 disks ;)\n\n\nOr more to the point here, if each random read is 4MB long. Which makes it\nmore like sequential reads, randomly-piecewise, rather than random reads.\n\n\n> but he also isn't going to get 1100 IOPS from 4 10k disks. The average 10k\n> disk is going to get around 130 IOPS . If he only has 4 then there is no\n> way he is getting 1100 IOPS.\n>\n\nI wouldn't be sure. He is using an iodepth of 256 in his benchmark. It\nwouldn't be all that outrageous for a disk to be able to find 3 or 4\nsectors per revolution it can read, when it has that many to choose from.\n\n Cheers,\n\nJeff\n\nOn Tue, Jul 11, 2017 at 4:42 PM, Joshua D. Drake <[email protected]> wrote:On 07/11/2017 04:15 PM, Merlin Moncure wrote:\n\nOn Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n<[email protected]> wrote:\n\nI’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\nHardware is:\n\n*2x Intel Xeon E5550\n\n*72GB RAM\n\n*Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\nread/20% write) for Postgresql data only:\n\nThe problem I have is very poor read. When I benchmark my array with fio I\nget random reads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity, I get\nat best 4MB/s. Also using dstat I can see that iowait time is at about 25%.\nThis problem is not query-dependent.\n\n\nStop right there. 1100 iops * 8kb = ~8mb/sec raw which might\nreasonably translate to 4mb/sec to the client. 200mb/sec random\nread/sec on spinning media is simply not plausible;\n\n\nSure it is, if he had more than 4 disks ;) Or more to the point here, if each random read is 4MB long. Which makes it more like sequential reads, randomly-piecewise, rather than random reads. but he also isn't going to get 1100 IOPS from 4 10k disks. The average 10k disk is going to get around 130 IOPS . If he only has 4 then there is no way he is getting 1100 IOPS.I wouldn't be sure. He is using an iodepth of 256 in his benchmark. It wouldn't be all that outrageous for a disk to be able to find 3 or 4 sectors per revolution it can read, when it has that many to choose from. Cheers,Jeff",
"msg_date": "Tue, 11 Jul 2017 18:03:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
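Jeff's plausibility argument can be checked with a few lines of arithmetic; the only hardware figure assumed here is the 10k rpm rotational speed from the drive description earlier in the thread:

# 1100 IOPS at PostgreSQL's 8 kB block size:
echo "1100 * 8 / 1024" | bc -l             # ~8.6 MB/s raw, so ~4 MB/s at the client is consistent
# Revolutions per second of a 10k rpm drive:
echo "10000 / 60" | bc -l                  # ~166.7 rev/s
# Reads per revolution per drive implied by 1100 IOPS over 4 spindles:
echo "1100 / (4 * (10000 / 60))" | bc -l   # ~1.65, within the 3-4 Jeff considers reachable with a deep queue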
{
"msg_contents": "Hmm - how are you measuring that sequential scan speed of 4MB/s? I'd \nrecommend doing a very simple test e.g, here's one on my workstation - \n13 GB single table on 1 SATA drive - cold cache after reboot, sequential \nscan using Postgres 9.6.2:\n\nbench=# EXPLAIN SELECT count(*) FROM pgbench_accounts;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Aggregate (cost=2889345.00..2889345.01 rows=1 width=8)\n -> Seq Scan on pgbench_accounts (cost=0.00..2639345.00 \nrows=100000000 width=0)\n(2 rows)\n\n\nbench=# SELECT pg_relation_size('pgbench_accounts');\n pg_relation_size\n------------------\n 13429514240\n(1 row)\n\nbench=# SELECT count(*) FROM pgbench_accounts;\n count\n-----------\n 100000000\n(1 row)\n\nTime: 118884.277 ms\n\n\nSo doing the math seq read speed is about 110MB/s (i.e 13 GB in 120 \nsec). Sure enough, while I was running the query iostat showed:\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz \navgqu-sz await r_await w_await svctm %util\nsda 0.00 0.00 926.00 0.00 114.89 0.00 \n254.10 1.90 2.03 2.03 0.00 1.08 100.00\n\n\nSo might be useful for us to see something like that from your system - \nnote you need to check you really have flushed the cache, and that no \nother apps are using the db.\n\nregards\n\nMark\n\nOn 12/07/17 00:46, Charles Nadeau wrote:\n> After reducing random_page_cost to 4 and testing more, I can report \n> that the aggregate read throughput for parallel sequential scan is \n> about 90MB/s. However the throughput for sequential scan is still \n> around 4MB/s.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Jul 2017 14:11:53 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
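Mark's measurement can be reproduced without a reboot by dropping the OS page cache and timing the scan from the shell while iostat watches the array. A rough sketch, assuming the large table and database are both named flows as elsewhere in the thread; adjust names and device as needed:

#!/bin/bash
# Restart PostgreSQL first if you also want shared_buffers cold, then
# flush dirty pages and drop the OS page cache so the scan hits disk.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Relation size, for the MB/s calculation afterwards.
psql -d flows -c "SELECT pg_size_pretty(pg_relation_size('flows'));"

# Time a full sequential scan; divide relation size by elapsed seconds.
time psql -d flows -c "SELECT count(*) FROM flows;"

# In another terminal, watch per-device throughput while the scan runs:
# iostat -xm 5 /dev/sdc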
{
"msg_contents": "Igor,\n\nI set shared_buffers to 24 GB and effective_cache_size to 64GB and I can\nsee that the queries are faster due to the fact that the index are used\nmore often. Knowing I have 72GB of RAM and the server is exclusively\ndedicated to Postgresql, what could be the maximum value for\neffective_cache?\nThanks!\n\nCharles\n\nOn Tue, Jul 11, 2017 at 5:16 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Igor Neyman\n> *Sent:* Tuesday, July 11, 2017 10:34 AM\n> *To:* Charles Nadeau <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> *From:* Charles Nadeau [mailto:[email protected]\n> <[email protected]>]\n> *Sent:* Tuesday, July 11, 2017 6:43 AM\n> *To:* Igor Neyman <[email protected]>\n> *Cc:* Andreas Kretschmer <[email protected]>;\n> [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Igor,\n>\n>\n>\n> I reduced the value of random_page_cost to 4 but the read speed remains\n> low.\n>\n> Regarding effective_cache_size and shared_buffer, do you mean they should\n> be both equal to 64GB?\n>\n> Thanks for suggestions!\n>\n>\n>\n> Charles\n>\n>\n>\n> No, they should not be equal.\n>\n> From the docs:\n>\n>\n>\n> effective_cache_size (integer)\n>\n> Sets the planner's assumption about the effective size of the disk cache\n> that is available to a single query. This is factored into estimates of the\n> cost of using an index; a higher value makes it more likely index scans\n> will be used, a lower value makes it more likely sequential scans will be\n> used. When setting this parameter you should consider both PostgreSQL's\n> shared buffers and the portion of the kernel's disk cache that will be used\n> for PostgreSQL data files. Also, take into account the expected number of\n> concurrent queries on different tables, since they will have to share the\n> available space. This parameter has no effect on the size of shared memory\n> allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used\n> only for estimation purposes. The system also does not assume data remains\n> in the disk cache between queries. The default is 4 gigabytes (4GB).\n>\n> So, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n>\n>\n>\n> Regards,\n>\n> Igor\n>\n>\n>\n> Also, maybe it’s time to look at execution plans (explain analyze) of\n> specific slow queries, instead of trying to solve the problem “in general”.\n>\n>\n>\n> Igor\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,I set shared_buffers to 24 GB and effective_cache_size to 64GB and I can see that the queries are faster due to the fact that the index are used more often. 
Knowing I have 72GB of RAM and the server is exclusively dedicated to Postgresql, what could be the maximum value for effective_cache?Thanks!CharlesOn Tue, Jul 11, 2017 at 5:16 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Igor Neyman\nSent: Tuesday, July 11, 2017 10:34 AM\nTo: Charles Nadeau <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\n\nSent: Tuesday, July 11, 2017 6:43 AM\nTo: Igor Neyman <[email protected]>\nCc: Andreas Kretschmer <[email protected]>;\[email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n\n\n \nIgor,\n\n\n \n\n\nI reduced the value of random_page_cost to 4 but the read speed remains low.\n\n\nRegarding effective_cache_size and shared_buffer, do you mean they should be both equal to 64GB?\n\n\nThanks for suggestions!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nNo, they should not be equal.\nFrom the docs:\n \neffective_cache_size (integer)\n\nSets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it\n more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number\n of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes.\n The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes (4GB).\nSo, I’d set shared_buffers at 24GB and effective_cache_size at 64GB.\n \nRegards,\n\nIgor\n\n \nAlso, maybe it’s time to look at execution plans (explain analyze) of specific slow queries, instead of trying to solve the problem “in general”.\n \nIgor\n \n\n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Wed, 12 Jul 2017 09:21:09 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Joshua,\n\nI use noop as the scheduler because it is better to let the RAID controller\nre-arrange the IO operation before they reach the disk. Read ahead is set\nto 128:\n\ncharles@hpdl380g6:~$ cat /sys/block/sdc/queue/read_ahead_kb\n128\ncharles@hpdl380g6:~$ cat /sys/block/sdc/queue/scheduler\n[noop] deadline cfq\n\nThanks!\n\nCharles\n\nOn Wed, Jul 12, 2017 at 1:42 AM, Joshua D. Drake <[email protected]>\nwrote:\n\n> On 07/11/2017 04:15 PM, Merlin Moncure wrote:\n>\n>> On Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n>> <[email protected]> wrote:\n>>\n>>> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n>>> Hardware is:\n>>>\n>>> *2x Intel Xeon E5550\n>>>\n>>> *72GB RAM\n>>>\n>>> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n>>> read/20% write) for Postgresql data only:\n>>>\n>>> The problem I have is very poor read. When I benchmark my array with fio\n>>> I\n>>> get random reads of about 200MB/s and 1100IOPS and sequential reads of\n>>> about\n>>> 286MB/s and 21000IPS. But when I watch my queries using pg_activity, I\n>>> get\n>>> at best 4MB/s. Also using dstat I can see that iowait time is at about\n>>> 25%.\n>>> This problem is not query-dependent.\n>>>\n>>\n>> Stop right there. 1100 iops * 8kb = ~8mb/sec raw which might\n>> reasonably translate to 4mb/sec to the client. 200mb/sec random\n>> read/sec on spinning media is simply not plausible;\n>>\n>\n> Sure it is, if he had more than 4 disks ;) but he also isn't going to get\n> 1100 IOPS from 4 10k disks. The average 10k disk is going to get around 130\n> IOPS . If he only has 4 then there is no way he is getting 1100 IOPS.\n>\n> Using the above specs (4x146GB) the best he can reasonably hope for from\n> the drives themselves is about 50MB/s add in the 1GB FWBC and that is how\n> he is getting those high numbers for IOPS but that is because of caching.\n>\n> He may need to adjust his readahead as well as his kernel scheduler. At a\n> minimum he should be able to saturate the drives without issue.\n>\n> JD\n>\n>\n>\n> --\n> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n>\n> PostgreSQL Centered full stack support, consulting and development.\n> Advocate: @amplifypostgres || Learn: https://pgconf.us\n> ***** Unless otherwise stated, opinions are my own. *****\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nJoshua,I use noop as the scheduler because it is better to let the RAID controller re-arrange the IO operation before they reach the disk. Read ahead is set to 128:charles@hpdl380g6:~$ cat /sys/block/sdc/queue/read_ahead_kb128charles@hpdl380g6:~$ cat /sys/block/sdc/queue/scheduler[noop] deadline cfq Thanks!CharlesOn Wed, Jul 12, 2017 at 1:42 AM, Joshua D. Drake <[email protected]> wrote:On 07/11/2017 04:15 PM, Merlin Moncure wrote:\n\nOn Mon, Jul 10, 2017 at 9:03 AM, Charles Nadeau\n<[email protected]> wrote:\n\nI’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\nHardware is:\n\n*2x Intel Xeon E5550\n\n*72GB RAM\n\n*Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\nread/20% write) for Postgresql data only:\n\nThe problem I have is very poor read. When I benchmark my array with fio I\nget random reads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity, I get\nat best 4MB/s. Also using dstat I can see that iowait time is at about 25%.\nThis problem is not query-dependent.\n\n\nStop right there. 
1100 iops * 8kb = ~8mb/sec raw which might\nreasonably translate to 4mb/sec to the client. 200mb/sec random\nread/sec on spinning media is simply not plausible;\n\n\nSure it is, if he had more than 4 disks ;) but he also isn't going to get 1100 IOPS from 4 10k disks. The average 10k disk is going to get around 130 IOPS . If he only has 4 then there is no way he is getting 1100 IOPS.\n\nUsing the above specs (4x146GB) the best he can reasonably hope for from the drives themselves is about 50MB/s add in the 1GB FWBC and that is how he is getting those high numbers for IOPS but that is because of caching.\n\nHe may need to adjust his readahead as well as his kernel scheduler. At a minimum he should be able to saturate the drives without issue.\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n\nPostgreSQL Centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Learn: https://pgconf.us\n***** Unless otherwise stated, opinions are my own. *****\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Wed, 12 Jul 2017 09:30:16 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
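Given Jeff's earlier suggestion to raise the kernel readahead and the 128 kB default shown here, a quick experiment is to bump it on the data device and re-run a sequential-read test. A sketch; the 4 MB value is only a starting point for testing, not a recommendation from the thread:

#!/bin/bash
# Current readahead on the PostgreSQL data volume (128 kB here).
cat /sys/block/sdc/queue/read_ahead_kb

# Raise it to 4 MB; either interface works.
echo 4096 | sudo tee /sys/block/sdc/queue/read_ahead_kb
# or, in 512-byte sectors:
# sudo blockdev --setra 8192 /dev/sdc

# The setting does not persist across reboots; add it to a udev rule
# or an init script once a good value is found.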
{
"msg_contents": "Jeff,\n\nHere are the 2 EXPLAINs for one of my simplest query:\n\nflows=# SET track_io_timing = on;\nLOG: duration: 24.101 ms statement: SET track_io_timing = on;\nSET\nflows=# explain (analyze, timing off) SELECT DISTINCT\nflows-# srcaddr,\nflows-# dstaddr,\nflows-# dstport,\nflows-# COUNT(*) AS conversation,\nflows-# SUM(doctets) / 1024 / 1024 AS mbytes\nflows-# FROM\nflows-# flowscompact,\nflows-# mynetworks\nflows-# WHERE\nflows-# mynetworks.ipaddr >>= flowscompact.srcaddr\nflows-# AND dstaddr IN\nflows-# (\nflows(# SELECT\nflows(# dstaddr\nflows(# FROM\nflows(# dstexterne\nflows(# )\nflows-# GROUP BY\nflows-# srcaddr,\nflows-# dstaddr,\nflows-# dstport\nflows-# ORDER BY\nflows-# mbytes DESC LIMIT 50;\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.3\", size\n1073741824\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.4\", size\n1073741824\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.5\", size\n639696896\nLOG: duration: 2632108.352 ms statement: explain (analyze, timing off)\nSELECT DISTINCT\n srcaddr,\n dstaddr,\n dstport,\n COUNT(*) AS conversation,\n SUM(doctets) / 1024 / 1024 AS mbytes\nFROM\n flowscompact,\n mynetworks\nWHERE\n mynetworks.ipaddr >>= flowscompact.srcaddr\n AND dstaddr IN\n (\n SELECT\n dstaddr\n FROM\n dstexterne\n )\nGROUP BY\n srcaddr,\n dstaddr,\n dstport\nORDER BY\n mbytes DESC LIMIT 50;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual rows=50\nloops=1)\n -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52)\n(actual rows=50 loops=1)\n -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52)\n(actual rows=50 loops=1)\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n'1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n(count(*))\n Sort Method: quicksort Memory: 563150kB\n -> GroupAggregate (cost=37698151.34..37714980.68\nrows=2243913 width=52) (actual rows=4691734 loops=1)\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n -> Sort (cost=37698151.34..37699273.29 rows=2243913\nwidth=20) (actual rows=81896988 loops=1)\n Sort Key: flows.srcaddr, flows.dstaddr,\nflows.dstport\n Sort Method: external merge Disk: 2721856kB\n -> Gather (cost=19463936.00..37650810.19\nrows=2243913 width=20) (actual rows=81896988 loops=1)\n Workers Planned: 9\n Workers Launched: 9\n -> Hash Semi Join\n (cost=19462936.00..37622883.23 rows=249324 width=20) (actual rows=8189699\nloops=10)\n Hash Cond: (flows.dstaddr =\nflows_1.dstaddr)\n -> Nested Loop\n (cost=0.03..18159012.30 rows=249324 width=20) (actual rows=45499045\nloops=10)\n -> Parallel Seq Scan on flows\n (cost=0.00..16039759.79 rows=62330930 width=20) (actual rows=54155970\nloops=10)\n -> Index Only Scan using\nmynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n(actual rows=1 loops=541559704)\n Index Cond: (ipaddr >>=\n(flows.srcaddr)::ip4r)\n Heap Fetches: 48679396\n -> Hash\n (cost=19462896.74..19462896.74 rows=11210 width=4) (actual rows=3099798\nloops=10)\n Buckets: 4194304 (originally\n16384) Batches: 1 (originally 1) Memory Usage: 141746kB\n -> HashAggregate\n (cost=19462829.48..19462863.11 rows=11210 width=4) (actual rows=3099798\nloops=10)\n Group Key:\nflows_1.dstaddr\n -> Nested 
Loop Anti\nJoin (cost=0.12..19182620.78 rows=560417390 width=4) (actual\nrows=113420172 loops=10)\n Join Filter:\n(mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n Rows Removed by\nJoin Filter: 453681377\n -> Index Only\nScan using flows_srcaddr_dstaddr_idx on flows flows_1\n (cost=0.12..9091067.70 rows=560978368 width=4) (actual rows=541559704\nloops=10)\n Heap\nFetches: 91\n -> Materialize\n (cost=0.00..1.02 rows=4 width=8) (actual rows=2 loops=5415597040)\n -> Seq Scan\non mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual rows=4\nloops=10)\n Planning time: 62.066 ms\n Execution time: 2631923.716 ms\n(33 rows)\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\nflows-# srcaddr,\nflows-# dstaddr,\nflows-# dstport,\nflows-# COUNT(*) AS conversation,\nflows-# SUM(doctets) / 1024 / 1024 AS mbytes\nflows-# FROM\nflows-# flowscompact,\nflows-# mynetworks\nflows-# WHERE\nflows-# mynetworks.ipaddr >>= flowscompact.srcaddr\nflows-# AND dstaddr IN\nflows-# (\nflows(# SELECT\nflows(# dstaddr\nflows(# FROM\nflows(# dstexterne\nflows(# )\nflows-# GROUP BY\nflows-# srcaddr,\nflows-# dstaddr,\nflows-# dstport\nflows-# ORDER BY\nflows-# mbytes DESC LIMIT 50;\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.6\", size\n1073741824\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.7\", size\n1073741824\nLOG: temporary file: path\n\"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.8\", size\n639696896\nLOG: duration: 2765020.327 ms statement: explain (analyze, buffers)\nSELECT DISTINCT\n srcaddr,\n dstaddr,\n dstport,\n COUNT(*) AS conversation,\n SUM(doctets) / 1024 / 1024 AS mbytes\nFROM\n flowscompact,\n mynetworks\nWHERE\n mynetworks.ipaddr >>= flowscompact.srcaddr\n AND dstaddr IN\n (\n SELECT\n dstaddr\n FROM\n dstexterne\n )\nGROUP BY\n srcaddr,\n dstaddr,\n dstport\nORDER BY\n mbytes DESC LIMIT 50;\n\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual\ntime=2764548.863..2764548.891 rows=50 loops=1)\n Buffers: shared hit=1116590560 read=15851133, temp read=340244\nwritten=340244\n I/O Timings: read=5323746.860\n -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52)\n(actual time=2764548.861..2764548.882 rows=50 loops=1)\n Buffers: shared hit=1116590560 read=15851133, temp read=340244\nwritten=340244\n I/O Timings: read=5323746.860\n -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52)\n(actual time=2764548.859..2764548.872 rows=50 loops=1)\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n'1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n(count(*))\n Sort Method: quicksort Memory: 563150kB\n Buffers: shared hit=1116590560 read=15851133, temp\nread=340244 written=340244\n I/O Timings: read=5323746.860\n -> GroupAggregate (cost=37698151.34..37714980.68\nrows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734\nloops=1)\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n Buffers: shared hit=1116590560 read=15851133, temp\nread=340244 written=340244\n I/O Timings: read=5323746.860\n -> Sort (cost=37698151.34..37699273.29 rows=2243913\nwidth=20) (actual time=2696711.428..2732781.705 rows=81896988 loops=1)\n Sort Key: flows.srcaddr, flows.dstaddr,\nflows.dstport\n Sort Method: 
external merge Disk: 2721856kB\n Buffers: shared hit=1116590560 read=15851133,\ntemp read=340244 written=340244\n I/O Timings: read=5323746.860\n -> Gather (cost=19463936.00..37650810.19\nrows=2243913 width=20) (actual time=1777219.713..2590530.887 rows=81896988\nloops=1)\n Workers Planned: 9\n Workers Launched: 9\n Buffers: shared hit=1116590559\nread=15851133\n I/O Timings: read=5323746.860\n -> Hash Semi Join\n (cost=19462936.00..37622883.23 rows=249324 width=20) (actual\ntime=1847579.360..2602039.780 rows=8189699 loops=10)\n Hash Cond: (flows.dstaddr =\nflows_1.dstaddr)\n Buffers: shared hit=1116588309\nread=15851133\n I/O Timings: read=5323746.860\n -> Nested Loop\n (cost=0.03..18159012.30 rows=249324 width=20) (actual\ntime=1.562..736556.583 rows=45499045 loops=10)\n Buffers: shared hit=996551813\nread=15851133\n I/O Timings: read=5323746.860\n -> Parallel Seq Scan on flows\n (cost=0.00..16039759.79 rows=62330930 width=20) (actual\ntime=1.506..547485.066 rows=54155970 loops=10)\n Buffers: shared hit=1634\nread=15851133\n I/O Timings:\nread=5323746.860\n -> Index Only Scan using\nmynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n(actual time=0.002..0.002 rows=1 loops=541559704)\n Index Cond: (ipaddr >>=\n(flows.srcaddr)::ip4r)\n Heap Fetches: 59971474\n Buffers: shared\nhit=996550152\n -> Hash\n (cost=19462896.74..19462896.74 rows=11210 width=4) (actual\ntime=1847228.894..1847228.894 rows=3099798 loops=10)\n Buckets: 4194304 (originally\n16384) Batches: 1 (originally 1) Memory Usage: 141746kB\n Buffers: shared hit=120036496\n -> HashAggregate\n (cost=19462829.48..19462863.11 rows=11210 width=4) (actual\ntime=1230049.015..1845955.764 rows=3099798 loops=10)\n Group Key:\nflows_1.dstaddr\n Buffers: shared\nhit=120036496\n -> Nested Loop Anti\nJoin (cost=0.12..19182620.78 rows=560417390 width=4) (actual\ntime=0.084..831832.333 rows=113420172 loops=10)\n Join Filter:\n(mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n Rows Removed by\nJoin Filter: 453681377\n Buffers: shared\nhit=120036496\n -> Index Only\nScan using flows_srcaddr_dstaddr_idx on flows flows_1\n (cost=0.12..9091067.70 rows=560978368 width=4) (actual\ntime=0.027..113052.437 rows=541559704 loops=10)\n Heap\nFetches: 91\n Buffers:\nshared hit=120036459\n -> Materialize\n (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2\nloops=5415597040)\n Buffers:\nshared hit=10\n -> Seq Scan\non mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual\ntime=0.007..0.008 rows=4 loops=10)\n\n Buffers: shared hit=10\n Planning time: 6.689 ms\n Execution time: 2764860.853 ms\n(58 rows)\n\n\nRegarding \"Also using dstat I can see that iowait time is at about 25%\", I\ndon't think the server was doing anything else. 
If it is important, I can\nrepeat the benchmarks.\nThanks!\n\nCharles\n\nOn Wed, Jul 12, 2017 at 2:39 AM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau <[email protected]>\n> wrote:\n>\n>> Jeff,\n>>\n>> I used fio in a quick benchmarking script inspired by\n>> https://smcleod.net/benchmarking-io/:\n>>\n>> #!/bin/bash\n>> #Random throughput\n>> echo \"Random throughput\"\n>> sync\n>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n>> --name=test --filename=test --bs=4M --iodepth=256 --size=10G\n>> --readwrite=randread --ramp_time=4\n>> #Random IOPS\n>> echo \"Random IOPS\"\n>> sync\n>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n>> --name=test --filename=test --bs=4k --iodepth=256 --size=4G\n>> --readwrite=randread --ramp_time=4\n>> #Sequential throughput\n>> echo \"Sequential throughput\"\n>> sync\n>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n>> --name=test --filename=test --bs=4M --iodepth=256 --size=10G\n>> --readwrite=read --ramp_time=4\n>> #Sequential IOPS\n>> echo \"Sequential IOPS\"\n>> sync\n>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1\n>> --name=test --filename=test --bs=4k --iodepth=256 --size=4G\n>> --readwrite=read --ramp_time=4\n>>\n>>\n> I don't think any of those are directly relevant to PostgreSQL, as it\n> doesn't use direct IO, doesn't use libaio, and is rarely going to get\n> anywhere near 256 iodepth. So the best they can do is put a theoretical\n> ceiling on the performance. Also, random IO with a 4MB stride doesn't make\n> any sense from a PostgreSQL perspective.\n>\n>\n>\n>>\n>> Performing the test you suggested, I get 128.5MB/s. Monitoring the test,\n>> I find that the throughput is constant from start to finish and that the\n>> iowait is also constant at 5%:\n>>\n>\n> I would have expected it to do better than that. Maybe you increase the\n> kernel readahead setting. I've found the default to be much too small.\n> But it doesn't make much difference to you, as you appear to be doing\n> random IO in your queries, not sequential.\n>\n>\n>> Could you suggest another way to benchmark random reads?\n>>\n>\n> Your 1100 IOPS times 8kb block size gives about 8MB/s of throughput, which\n> is close to what you report. So I think I'd would instead focus on tuning\n> your actual queries. You say the problem is not query-dependent, but I\n> think that that just means all the queries you looked at are similar. If\n> you looked at a query that can't use indexes, like count(unindexed_column)\n> from biggest_table; you would find it doing much more IO than 4MB/s.\n>\n> Can you pick the simplest query you actually care about, and post both an\n> \"explain (analyze, timing off)\" and an \"explain (analyze, buffers)\" for it?\n> (Preferably turning \"track_io_timing\" on first).\n>\n> One other question I had, you said you had \"2x Intel Xeon E5550\", which\n> should be 8 CPU (or 16, if the hyperthreads\n> are reported as separate CPUs). But you also said: \"Also using dstat I\n> can see that iowait time is at about 25%\". Usually if there is only one\n> thing going on on the server, then IOWAIT won't be more than reciprocal of\n> #CPU. 
Is the server busy doing other stuff at the same time you are\n> benchmarking it?\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nJeff,Here are the 2 EXPLAINs for one of my simplest query:flows=# SET track_io_timing = on;LOG: duration: 24.101 ms statement: SET track_io_timing = on;SETflows=# explain (analyze, timing off) SELECT DISTINCTflows-# srcaddr,flows-# dstaddr,flows-# dstport,flows-# COUNT(*) AS conversation,flows-# SUM(doctets) / 1024 / 1024 AS mbytes flows-# FROMflows-# flowscompact,flows-# mynetworks flows-# WHEREflows-# mynetworks.ipaddr >>= flowscompact.srcaddr flows-# AND dstaddr IN flows-# (flows(# SELECTflows(# dstaddr flows(# FROMflows(# dstexterneflows(# )flows-# GROUP BYflows-# srcaddr,flows-# dstaddr,flows-# dstport flows-# ORDER BYflows-# mbytes DESC LIMIT 50;LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.3\", size 1073741824LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.4\", size 1073741824LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.5\", size 639696896LOG: duration: 2632108.352 ms statement: explain (analyze, timing off) SELECT DISTINCT srcaddr, dstaddr, dstport, COUNT(*) AS conversation, SUM(doctets) / 1024 / 1024 AS mbytes FROM flowscompact, mynetworks WHERE mynetworks.ipaddr >>= flowscompact.srcaddr AND dstaddr IN ( SELECT dstaddr FROM dstexterne )GROUP BY srcaddr, dstaddr, dstport ORDER BY mbytes DESC LIMIT 50; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual rows=50 loops=1) -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52) (actual rows=50 loops=1) -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52) (actual rows=50 loops=1) Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*)) Sort Method: quicksort Memory: 563150kB -> GroupAggregate (cost=37698151.34..37714980.68 rows=2243913 width=52) (actual rows=4691734 loops=1) Group Key: flows.srcaddr, flows.dstaddr, flows.dstport -> Sort (cost=37698151.34..37699273.29 rows=2243913 width=20) (actual rows=81896988 loops=1) Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport Sort Method: external merge Disk: 2721856kB -> Gather (cost=19463936.00..37650810.19 rows=2243913 width=20) (actual rows=81896988 loops=1) Workers Planned: 9 Workers Launched: 9 -> Hash Semi Join (cost=19462936.00..37622883.23 rows=249324 width=20) (actual rows=8189699 loops=10) Hash Cond: (flows.dstaddr = flows_1.dstaddr) -> Nested Loop (cost=0.03..18159012.30 rows=249324 width=20) (actual rows=45499045 loops=10) -> Parallel Seq Scan on flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual rows=54155970 loops=10) -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual rows=1 loops=541559704) Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r) Heap Fetches: 48679396 -> Hash (cost=19462896.74..19462896.74 rows=11210 width=4) (actual rows=3099798 loops=10) Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 141746kB -> HashAggregate (cost=19462829.48..19462863.11 rows=11210 width=4) (actual rows=3099798 loops=10) Group Key: flows_1.dstaddr -> Nested Loop Anti Join 
(cost=0.12..19182620.78 rows=560417390 width=4) (actual rows=113420172 loops=10) Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r) Rows Removed by Join Filter: 453681377 -> Index Only Scan using flows_srcaddr_dstaddr_idx on flows flows_1 (cost=0.12..9091067.70 rows=560978368 width=4) (actual rows=541559704 loops=10) Heap Fetches: 91 -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual rows=2 loops=5415597040) -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual rows=4 loops=10) Planning time: 62.066 ms Execution time: 2631923.716 ms(33 rows)flows=# explain (analyze, buffers) SELECT DISTINCTflows-# srcaddr,flows-# dstaddr,flows-# dstport,flows-# COUNT(*) AS conversation,flows-# SUM(doctets) / 1024 / 1024 AS mbytes flows-# FROMflows-# flowscompact,flows-# mynetworks flows-# WHEREflows-# mynetworks.ipaddr >>= flowscompact.srcaddr flows-# AND dstaddr IN flows-# (flows(# SELECTflows(# dstaddr flows(# FROMflows(# dstexterneflows(# )flows-# GROUP BYflows-# srcaddr,flows-# dstaddr,flows-# dstport flows-# ORDER BYflows-# mbytes DESC LIMIT 50;LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.6\", size 1073741824LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.7\", size 1073741824LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.8\", size 639696896LOG: duration: 2765020.327 ms statement: explain (analyze, buffers) SELECT DISTINCT srcaddr, dstaddr, dstport, COUNT(*) AS conversation, SUM(doctets) / 1024 / 1024 AS mbytes FROM flowscompact, mynetworks WHERE mynetworks.ipaddr >>= flowscompact.srcaddr AND dstaddr IN ( SELECT dstaddr FROM dstexterne )GROUP BY srcaddr, dstaddr, dstport ORDER BY mbytes DESC LIMIT 50; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual time=2764548.863..2764548.891 rows=50 loops=1) Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244 I/O Timings: read=5323746.860 -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52) (actual time=2764548.861..2764548.882 rows=50 loops=1) Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244 I/O Timings: read=5323746.860 -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52) (actual time=2764548.859..2764548.872 rows=50 loops=1) Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*)) Sort Method: quicksort Memory: 563150kB Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244 I/O Timings: read=5323746.860 -> GroupAggregate (cost=37698151.34..37714980.68 rows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734 loops=1) Group Key: flows.srcaddr, flows.dstaddr, flows.dstport Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244 I/O Timings: read=5323746.860 -> Sort (cost=37698151.34..37699273.29 rows=2243913 width=20) (actual time=2696711.428..2732781.705 rows=81896988 loops=1) Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport Sort Method: external merge Disk: 2721856kB Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244 I/O Timings: read=5323746.860 -> Gather (cost=19463936.00..37650810.19 rows=2243913 width=20) 
(actual time=1777219.713..2590530.887 rows=81896988 loops=1) Workers Planned: 9 Workers Launched: 9 Buffers: shared hit=1116590559 read=15851133 I/O Timings: read=5323746.860 -> Hash Semi Join (cost=19462936.00..37622883.23 rows=249324 width=20) (actual time=1847579.360..2602039.780 rows=8189699 loops=10) Hash Cond: (flows.dstaddr = flows_1.dstaddr) Buffers: shared hit=1116588309 read=15851133 I/O Timings: read=5323746.860 -> Nested Loop (cost=0.03..18159012.30 rows=249324 width=20) (actual time=1.562..736556.583 rows=45499045 loops=10) Buffers: shared hit=996551813 read=15851133 I/O Timings: read=5323746.860 -> Parallel Seq Scan on flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual time=1.506..547485.066 rows=54155970 loops=10) Buffers: shared hit=1634 read=15851133 I/O Timings: read=5323746.860 -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=541559704) Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r) Heap Fetches: 59971474 Buffers: shared hit=996550152 -> Hash (cost=19462896.74..19462896.74 rows=11210 width=4) (actual time=1847228.894..1847228.894 rows=3099798 loops=10) Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 141746kB Buffers: shared hit=120036496 -> HashAggregate (cost=19462829.48..19462863.11 rows=11210 width=4) (actual time=1230049.015..1845955.764 rows=3099798 loops=10) Group Key: flows_1.dstaddr Buffers: shared hit=120036496 -> Nested Loop Anti Join (cost=0.12..19182620.78 rows=560417390 width=4) (actual time=0.084..831832.333 rows=113420172 loops=10) Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r) Rows Removed by Join Filter: 453681377 Buffers: shared hit=120036496 -> Index Only Scan using flows_srcaddr_dstaddr_idx on flows flows_1 (cost=0.12..9091067.70 rows=560978368 width=4) (actual time=0.027..113052.437 rows=541559704 loops=10) Heap Fetches: 91 Buffers: shared hit=120036459 -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=5415597040) Buffers: shared hit=10 -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.007..0.008 rows=4 loops=10) Buffers: shared hit=10 Planning time: 6.689 ms Execution time: 2764860.853 ms(58 rows)Regarding \"Also using dstat I can see that iowait time is at about 25%\", I don't think the server was doing anything else. 
If it is important, I can repeat the benchmarks.Thanks!CharlesOn Wed, Jul 12, 2017 at 2:39 AM, Jeff Janes <[email protected]> wrote:On Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau <[email protected]> wrote:Jeff,I used fio in a quick benchmarking script inspired by https://smcleod.net/benchmarking-io/:#!/bin/bash#Random throughputecho \"Random throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=randread --ramp_time=4#Random IOPSecho \"Random IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=randread --ramp_time=4#Sequential throughputecho \"Sequential throughput\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4#Sequential IOPSecho \"Sequential IOPS\"syncfio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=4G --readwrite=read --ramp_time=4I don't think any of those are directly relevant to PostgreSQL, as it doesn't use direct IO, doesn't use libaio, and is rarely going to get anywhere near 256 iodepth. So the best they can do is put a theoretical ceiling on the performance. Also, random IO with a 4MB stride doesn't make any sense from a PostgreSQL perspective. Performing the test you suggested, I get 128.5MB/s. Monitoring the test, I find that the throughput is constant from start to finish and that the iowait is also constant at 5%:I would have expected it to do better than that. Maybe you increase the kernel readahead setting. I've found the default to be much too small. But it doesn't make much difference to you, as you appear to be doing random IO in your queries, not sequential.Could you suggest another way to benchmark random reads?Your 1100 IOPS times 8kb block size gives about 8MB/s of throughput, which is close to what you report. So I think I'd would instead focus on tuning your actual queries. You say the problem is not query-dependent, but I think that that just means all the queries you looked at are similar. If you looked at a query that can't use indexes, like count(unindexed_column) from biggest_table; you would find it doing much more IO than 4MB/s.Can you pick the simplest query you actually care about, and post both an \"explain (analyze, timing off)\" and an \"explain (analyze, buffers)\" for it? (Preferably turning \"track_io_timing\" on first).One other question I had, you said you had \"2x Intel Xeon E5550\", which should be 8 CPU (or 16, if the hyperthreads are reported as separate CPUs). But you also said: \"Also using dstat I can see that iowait time is at about 25%\". Usually if there is only one thing going on on the server, then IOWAIT won't be more than reciprocal of #CPU. Is the server busy doing other stuff at the same time you are benchmarking it?Cheers,Jeff\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Wed, 12 Jul 2017 12:04:49 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
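The buffers and timing totals in the second plan give a direct estimate of read throughput during this query, which is more informative than the fio ceilings. A back-of-the-envelope sketch using only numbers from the plan above; note that I/O timings in a parallel plan are summed across the ten processes, so the figures are aggregates:

# shared read=15851133 blocks of 8 kB, I/O Timings: read=5323746.860 ms,
# total execution time 2764860.853 ms.
echo "15851133 * 8 / 1024" | bc -l                          # ~123837 MB (~121 GB) read from the OS
echo "15851133 * 8 / 1024 / (5323746.860 / 1000)" | bc -l   # ~23 MB/s while processes wait on reads
echo "15851133 * 8 / 1024 / (2764860.853 / 1000)" | bc -l   # ~45 MB/s averaged over the whole query

So for this query the array appears to deliver well above the 4 MB/s seen in pg_activity; much of the elapsed time seems to go into the 2.7 GB external merge sort and the half-billion index probes rather than into raw disk reads.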
{
"msg_contents": "Rick,\n\nShould the number of page should always be correlated to the VmPeak of the\npostmaster or could it be set to reflect shared_buffer or another setting?\nThanks!\n\nCharles\n\nOn Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]>\nwrote:\n\n> Although probably not the root cause, at the least I would set up\n> hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-\n> resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up\n> quite a bit as well (256 ?).\n>\n>\n> On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]\n> > wrote:\n>\n>> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n>> Hardware is:\n>>\n>> *2x Intel Xeon E5550\n>>\n>> *72GB RAM\n>>\n>> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n>> read/20% write) for Postgresql data only:\n>>\n>> Logical Drive: 3\n>>\n>> Size: 273.4 GB\n>>\n>> Fault Tolerance: 1+0\n>>\n>> Heads: 255\n>>\n>> Sectors Per Track: 32\n>>\n>> Cylinders: 65535\n>>\n>> Strip Size: 128 KB\n>>\n>> Full Stripe Size: 256 KB\n>>\n>> Status: OK\n>>\n>> Caching: Enabled\n>>\n>> Unique Identifier: 600508B1001037383941424344450A00\n>>\n>> Disk Name: /dev/sdc\n>>\n>> Mount Points: /mnt/data 273.4 GB\n>>\n>> OS Status: LOCKED\n>>\n>> Logical Drive Label: A00A194750123456789ABCDE516F\n>>\n>> Mirror Group 0:\n>>\n>> physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n>>\n>> physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n>>\n>> Mirror Group 1:\n>>\n>> physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n>>\n>> physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n>>\n>> Drive Type: Data\n>>\n>> Formatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v\n>> /dev/sdc1.\n>>\n>> Mounted in /etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n>> /mnt/data ext4 rw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro\n>> 0 1\"\n>>\n>> Postgresql is the only application running on this server.\n>>\n>>\n>> Postgresql is used as a mini data warehouse to generate reports and do\n>> statistical analysis. It is used by at most 2 users and fresh data is added\n>> every 10 days. 
The database has 16 tables: one is 224GB big and the rest\n>> are between 16kB and 470MB big.\n>>\n>>\n>> My configuration is:\n>>\n>>\n>> name | current_setting | source\n>>\n>> ---------------------------------+--------------------------\n>> ----------------------+----------------------\n>>\n>> application_name | psql | client\n>>\n>> autovacuum_vacuum_scale_factor | 0 | configuration file\n>>\n>> autovacuum_vacuum_threshold | 2000 | configuration file\n>>\n>> checkpoint_completion_target | 0.9 | configuration file\n>>\n>> checkpoint_timeout | 30min | configuration file\n>>\n>> client_encoding | UTF8 | client\n>>\n>> client_min_messages | log | configuration file\n>>\n>> cluster_name | 9.6/main | configuration file\n>>\n>> cpu_index_tuple_cost | 0.001 | configuration file\n>>\n>> cpu_operator_cost | 0.0005 | configuration file\n>>\n>> cpu_tuple_cost | 0.003 | configuration file\n>>\n>> DateStyle | ISO, YMD | configuration file\n>>\n>> default_statistics_target | 100 | configuration file\n>>\n>> default_text_search_config | pg_catalog.english | configuration file\n>>\n>> dynamic_shared_memory_type | posix | configuration file\n>>\n>> effective_cache_size | 22GB | configuration file\n>>\n>> effective_io_concurrency | 4 | configuration file\n>>\n>> external_pid_file | /var/run/postgresql/9.6-main.pid | configuration file\n>>\n>> lc_messages | C | configuration file\n>>\n>> lc_monetary | en_CA.UTF-8 | configuration file\n>>\n>> lc_numeric | en_CA.UTF-8 | configuration file\n>>\n>> lc_time | en_CA.UTF-8 | configuration file\n>>\n>> listen_addresses | * | configuration file\n>>\n>> lock_timeout | 100s | configuration file\n>>\n>> log_autovacuum_min_duration | 0 | configuration file\n>>\n>> log_checkpoints | on | configuration file\n>>\n>> log_connections | on | configuration file\n>>\n>> log_destination | csvlog | configuration file\n>>\n>> log_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\n>> configuration file\n>>\n>> log_disconnections | on | configuration file\n>>\n>> log_error_verbosity | default | configuration file\n>>\n>> log_file_mode | 0600 | configuration file\n>>\n>> log_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\n>>\n>> log_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file\n>>\n>> log_lock_waits | on | configuration file\n>>\n>> log_min_duration_statement | 0 | configuration file\n>>\n>> log_min_error_statement | debug1 | configuration file\n>>\n>> log_min_messages | debug1 | configuration file\n>>\n>> log_rotation_size | 1GB | configuration file\n>>\n>> log_temp_files | 0 | configuration file\n>>\n>> log_timezone | localtime | configuration file\n>>\n>> logging_collector | on | configuration file\n>>\n>> maintenance_work_mem | 3GB | configuration file\n>>\n>> max_connections | 10 | configuration file\n>>\n>> max_locks_per_transaction | 256 | configuration file\n>>\n>> max_parallel_workers_per_gather | 14 | configuration file\n>>\n>> max_stack_depth | 2MB | environment variable\n>>\n>> max_wal_size | 4GB | configuration file\n>>\n>> max_worker_processes | 14 | configuration file\n>>\n>> min_wal_size | 2GB | configuration file\n>>\n>> parallel_setup_cost | 1000 | configuration file\n>>\n>> parallel_tuple_cost | 0.012 | configuration file\n>>\n>> port | 5432 | configuration file\n>>\n>> random_page_cost | 22 | configuration file\n>>\n>> seq_page_cost | 1 | configuration file\n>>\n>> shared_buffers | 34GB | configuration file\n>>\n>> shared_preload_libraries | pg_stat_statements | configuration file\n>>\n>> ssl | on | configuration 
file\n>>\n>> ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n>>\n>> ssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | configuration\n>> file\n>>\n>> statement_timeout | 1000000s | configuration file\n>>\n>> stats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp |\n>> configuration file\n>>\n>> superuser_reserved_connections | 1 | configuration file\n>>\n>> syslog_facility | local1 | configuration file\n>>\n>> syslog_ident | postgres | configuration file\n>>\n>> syslog_sequence_numbers | on | configuration file\n>>\n>> temp_file_limit | 80GB | configuration file\n>>\n>> TimeZone | localtime | configuration file\n>>\n>> track_activities | on | configuration file\n>>\n>> track_counts | on | configuration file\n>>\n>> track_functions | all | configuration file\n>>\n>> unix_socket_directories | /var/run/postgresql | configuration file\n>>\n>> vacuum_cost_delay | 1ms | configuration file\n>>\n>> vacuum_cost_limit | 5000 | configuration file\n>>\n>> vacuum_cost_page_dirty | 200 | configuration file\n>>\n>> vacuum_cost_page_hit | 10 | configuration file\n>>\n>> vacuum_cost_page_miss | 100 | configuration file\n>>\n>> wal_buffers | 16MB | configuration file\n>>\n>> wal_compression | on | configuration file\n>>\n>> wal_sync_method | fdatasync | configuration file\n>>\n>> work_mem | 1468006kB | configuration file\n>>\n>>\n>> The part of /etc/sysctl.conf I modified is:\n>>\n>> vm.swappiness = 1\n>>\n>> vm.dirty_background_bytes = 134217728\n>>\n>> vm.dirty_bytes = 1073741824\n>>\n>> vm.overcommit_ratio = 100\n>>\n>> vm.zone_reclaim_mode = 0\n>>\n>> kernel.numa_balancing = 0\n>>\n>> kernel.sched_autogroup_enabled = 0\n>>\n>> kernel.sched_migration_cost_ns = 5000000\n>>\n>>\n>> The problem I have is very poor read. When I benchmark my array with fio\n>> I get random reads of about 200MB/s and 1100IOPS and sequential reads of\n>> about 286MB/s and 21000IPS. But when I watch my queries using pg_activity,\n>> I get at best 4MB/s. Also using dstat I can see that iowait time is at\n>> about 25%. This problem is not query-dependent.\n>>\n>> I backed up the database, I reformated the array making sure it is well\n>> aligned then restored the database and got the same result.\n>>\n>> Where should I target my troubleshooting at this stage? I reformatted my\n>> drive, I tuned my postgresql.conf and OS as much as I could. The hardware\n>> doesn’t seem to have any issues, I am really puzzled.\n>>\n>> Thanks!\n>>\n>>\n>> Charles\n>>\n>> --\n>> Charles Nadeau Ph.D.\n>>\n>\n>\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nRick,Should the number of page should always be correlated to the VmPeak of the postmaster or could it be set to reflect shared_buffer or another setting?Thanks!CharlesOn Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]> wrote:Although probably not the root cause, at the least I would set up hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up quite a bit as well (256 ?).On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau <[email protected]> wrote:\nI’m running\nPostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic). 
Hardware\nis:\n*2x Intel Xeon E5550\n*72GB RAM\n*Hardware RAID10 (4\nx 146GB SAS 10k) P410i controller with 1GB FBWC (80% read/20% write)\nfor Postgresql data only:\n Logical Drive:\n3\n Size: 273.4\nGB\n Fault\nTolerance: 1+0\n Heads: 255\n Sectors Per\nTrack: 32\n Cylinders:\n65535\n Strip Size:\n128 KB\n Full Stripe\nSize: 256 KB\n Status: OK\n Caching: \nEnabled\n Unique\nIdentifier: 600508B1001037383941424344450A00\n Disk Name:\n/dev/sdc\n Mount\nPoints: /mnt/data 273.4 GB\n OS Status:\nLOCKED\n Logical\nDrive Label: A00A194750123456789ABCDE516F\n Mirror\nGroup 0:\n \nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n Mirror\nGroup 1:\n \nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n Drive Type:\nData\nFormatted with ext4\nwith: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v /dev/sdc1.\nMounted in\n/etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n/mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\nPostgresql is the\nonly application running on this server.\n\n\nPostgresql is used\nas a mini data warehouse to generate reports and do statistical\nanalysis. It is used by at most 2 users and fresh data is added every\n10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\n name \n | current_setting | \nsource \n\n---------------------------------+------------------------------------------------+----------------------\n application_name \n | psql | client\n\nautovacuum_vacuum_scale_factor | 0 \n | configuration file\n\nautovacuum_vacuum_threshold | 2000 \n | configuration file\n\ncheckpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout \n | 30min |\nconfiguration file\n client_encoding \n | UTF8 | client\n client_min_messages\n | log |\nconfiguration file\n cluster_name \n | 9.6/main |\nconfiguration file\n\ncpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost \n | 0.0005 |\nconfiguration file\n cpu_tuple_cost \n | 0.003 |\nconfiguration file\n DateStyle \n | ISO, YMD |\nconfiguration file\n\ndefault_statistics_target | 100 \n | configuration file\n\ndefault_text_search_config | pg_catalog.english \n | configuration file\n\ndynamic_shared_memory_type | posix \n | configuration file\n\neffective_cache_size | 22GB \n | configuration file\n\neffective_io_concurrency | 4 \n | configuration file\n external_pid_file \n | /var/run/postgresql/9.6-main.pid |\nconfiguration file\n lc_messages \n | C |\nconfiguration file\n lc_monetary \n | en_CA.UTF-8 |\nconfiguration file\n lc_numeric \n | en_CA.UTF-8 |\nconfiguration file\n lc_time \n | en_CA.UTF-8 |\nconfiguration file\n listen_addresses \n | * |\nconfiguration file\n lock_timeout \n | 100s |\nconfiguration file\n\nlog_autovacuum_min_duration | 0 \n | configuration file\n log_checkpoints \n | on |\nconfiguration file\n log_connections \n | on |\nconfiguration file\n log_destination \n | csvlog |\nconfiguration file\n log_directory \n | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n log_disconnections \n | on |\nconfiguration file\n log_error_verbosity\n | default |\nconfiguration file\n log_file_mode \n | 0600 |\nconfiguration file\n log_filename \n | postgresql-%Y-%m-%d_%H%M%S.log |\nconfiguration file\n log_line_prefix \n | user=%u,db=%d,app=%aclient=%h |\nconfiguration file\n log_lock_waits \n | 
on |\nconfiguration file\n\nlog_min_duration_statement | 0 \n | configuration file\n\nlog_min_error_statement | debug1 \n | configuration file\n log_min_messages \n | debug1 |\nconfiguration file\n log_rotation_size \n | 1GB |\nconfiguration file\n log_temp_files \n | 0 |\nconfiguration file\n log_timezone \n | localtime |\nconfiguration file\n logging_collector \n | on |\nconfiguration file\n\nmaintenance_work_mem | 3GB \n | configuration file\n max_connections \n | 10 |\nconfiguration file\n\nmax_locks_per_transaction | 256 \n | configuration file\n\nmax_parallel_workers_per_gather | 14 \n | configuration file\n max_stack_depth \n | 2MB |\nenvironment variable\n max_wal_size \n | 4GB |\nconfiguration file\n\nmax_worker_processes | 14 \n | configuration file\n min_wal_size \n | 2GB |\nconfiguration file\n parallel_setup_cost\n | 1000 |\nconfiguration file\n parallel_tuple_cost\n | 0.012 |\nconfiguration file\n port \n | 5432 |\nconfiguration file\n random_page_cost \n | 22 |\nconfiguration file\n seq_page_cost \n | 1 |\nconfiguration file\n shared_buffers \n | 34GB |\nconfiguration file\n\nshared_preload_libraries | pg_stat_statements \n | configuration file\n ssl \n | on |\nconfiguration file\n ssl_cert_file \n | /etc/ssl/certs/ssl-cert-snakeoil.pem |\nconfiguration file\n ssl_key_file \n | /etc/ssl/private/ssl-cert-snakeoil.key |\nconfiguration file\n statement_timeout \n | 1000000s |\nconfiguration file\n\nstats_temp_directory |\n/var/run/postgresql/9.6-main.pg_stat_tmp | configuration file\n\nsuperuser_reserved_connections | 1 \n | configuration file\n syslog_facility \n | local1 |\nconfiguration file\n syslog_ident \n | postgres |\nconfiguration file\n\nsyslog_sequence_numbers | on \n | configuration file\n temp_file_limit \n | 80GB |\nconfiguration file\n TimeZone \n | localtime |\nconfiguration file\n track_activities \n | on |\nconfiguration file\n track_counts \n | on |\nconfiguration file\n track_functions \n | all |\nconfiguration file\n\nunix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_cost_delay \n | 1ms |\nconfiguration file\n vacuum_cost_limit \n | 5000 |\nconfiguration file\n\nvacuum_cost_page_dirty | 200 \n | configuration file\n\nvacuum_cost_page_hit | 10 \n | configuration file\n\nvacuum_cost_page_miss | 100 \n | configuration file\n wal_buffers \n | 16MB |\nconfiguration file\n wal_compression \n | on |\nconfiguration file\n wal_sync_method \n | fdatasync |\nconfiguration file\n work_mem \n | 1468006kB |\nconfiguration file\n\n\nThe part of\n/etc/sysctl.conf I modified is:\nvm.swappiness = 1\nvm.dirty_background_bytes\n= 134217728\nvm.dirty_bytes =\n1073741824\nvm.overcommit_ratio\n= 100\nvm.zone_reclaim_mode\n= 0\nkernel.numa_balancing\n= 0\nkernel.sched_autogroup_enabled\n= 0\nkernel.sched_migration_cost_ns\n= 5000000\n\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is\nat about 25%. This problem is not query-dependent.\nI backed up the\ndatabase, I reformated the array making sure it is well aligned then\nrestored the database and got the same result.\nWhere should I\ntarget my troubleshooting at this stage? I reformatted my drive, I\ntuned my postgresql.conf and OS as much as I could. 
The hardware\ndoesn’t seem to have any issues, I am really puzzled.\nThanks!\n\n\nCharles-- Charles Nadeau Ph.D.\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Wed, 12 Jul 2017 15:38:23 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
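On the VmPeak question: the huge pages procedure in the 9.6 documentation Rick linked sizes vm.nr_hugepages from the postmaster's VmPeak divided by the huge page size, so it tracks shared_buffers plus the other shared structures rather than shared_buffers alone. A sketch of that procedure; the data directory path is an assumption and must be adjusted:

#!/bin/bash
PGDATA=/mnt/data/9.6/main                    # assumption; use the real data directory
PID=$(head -n 1 "$PGDATA/postmaster.pid")    # postmaster PID is the first line

grep ^VmPeak /proc/$PID/status               # peak virtual size of the postmaster, in kB
grep ^Hugepagesize /proc/meminfo             # usually 2048 kB on x86_64

# Set vm.nr_hugepages to at least VmPeak / Hugepagesize (rounded up):
# sudo sysctl -w vm.nr_hugepages=<VmPeak/Hugepagesize>
# and leave huge_pages at its default of 'try' (or set it to 'on') in postgresql.conf.

grep -i ^Huge /proc/meminfo                  # confirm allocation and, later, usage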
{
"msg_contents": "From: Charles Nadeau [mailto:[email protected]]\r\nSent: Wednesday, July 12, 2017 3:21 AM\r\nTo: Igor Neyman <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nIgor,\r\n\r\nI set shared_buffers to 24 GB and effective_cache_size to 64GB and I can see that the queries are faster due to the fact that the index are used more often. Knowing I have 72GB of RAM and the server is exclusively dedicated to Postgresql, what could be the maximum value for effective_cache?\r\nThanks!\r\n\r\nCharles\r\n\r\n64GB for effective_cache_size should be good enough, adding couple more GB wouldn’t change much.\r\n\r\nIgor\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Wednesday, July 12, 2017 3:21 AM\nTo: Igor Neyman <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n\n\n\n \nIgor,\n\n\n \n\n\nI set shared_buffers to 24 GB and effective_cache_size to 64GB and I can see that the queries are faster due to the fact that the index are used more often. Knowing I have 72GB of RAM and the server is exclusively dedicated to Postgresql,\r\n what could be the maximum value for effective_cache?\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \n64GB for effective_cache_size should be good enough, adding couple more GB wouldn’t change much.\n \nIgor",
"msg_date": "Wed, 12 Jul 2017 13:57:06 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
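For reference, both values discussed here can be applied with ALTER SYSTEM rather than by editing postgresql.conf; effective_cache_size takes effect on reload, while shared_buffers needs a restart. A sketch (the service name may differ on other installations):

#!/bin/bash
psql -c "ALTER SYSTEM SET shared_buffers = '24GB';"
psql -c "ALTER SYSTEM SET effective_cache_size = '64GB';"
psql -c "SELECT pg_reload_conf();"       # picks up effective_cache_size
sudo systemctl restart postgresql        # required for shared_buffers to change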
{
"msg_contents": "On Wed, Jul 12, 2017 at 9:38 AM, Charles Nadeau <[email protected]>\nwrote:\n\n> Rick,\n>\n> Should the number of page should always be correlated to the VmPeak of the\n> postmaster or could it be set to reflect shared_buffer or another setting?\n> Thanks!\n>\n>\nThe documentation implies that you may need to adjust its size when you\nchange shared_buffer settings.\n\nI usually check it every now and then (I haven't build a formal monitor\nyet.) to see if all of the huge pages are free/used and if it looks like\nthey are all getting consumed - consider bumping it higher. If there are\nlots free, you are probably fine.\n\ncat /proc/meminfo | grep -i \"^huge\"\n\n--\n\nAlso regarding my note on effective_io_concurrency, which I'm not sure you\ntried tweaking yet.\n\nWith file system and hardware caching between you and your spindles, your\nbest setting for effective_io_concurrency may be much higher than the\nactual number of spindles. It is worth experimenting with. If you can,\ntry several values. You can use pg_bench to put consistent workloads on\nyour database for measurement purposes.\n\n\nCharles\n>\n> On Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]>\n> wrote:\n>\n>> Although probably not the root cause, at the least I would set up\n>> hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-resourc\n>> es.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up quite a\n>> bit as well (256 ?).\n>>\n>>\n\nOn Wed, Jul 12, 2017 at 9:38 AM, Charles Nadeau <[email protected]> wrote:Rick,Should the number of page should always be correlated to the VmPeak of the postmaster or could it be set to reflect shared_buffer or another setting?Thanks!The documentation implies that you may need to adjust its size when you change shared_buffer settings. I usually check it every now and then (I haven't build a formal monitor yet.) to see if all of the huge pages are free/used and if it looks like they are all getting consumed - consider bumping it higher. If there are lots free, you are probably fine.cat /proc/meminfo | grep -i \"^huge\"--Also regarding my note on effective_io_concurrency, which I'm not sure you tried tweaking yet.With file system and hardware caching between you and your spindles, your best setting for effective_io_concurrency may be much higher than the actual number of spindles. It is worth experimenting with. If you can, try several values. You can use pg_bench to put consistent workloads on your database for measurement purposes.CharlesOn Mon, Jul 10, 2017 at 5:25 PM, Rick Otten <[email protected]> wrote:Although probably not the root cause, at the least I would set up hugepages ( https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES ), and bump effective_io_concurrency up quite a bit as well (256 ?).",
"msg_date": "Wed, 12 Jul 2017 10:10:27 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
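A hedged sketch of the huge-page sizing Rick describes, following the kernel-resources page he links. The PGDATA path and the nr_hugepages figure are assumptions to be replaced with your own numbers; the effective_io_concurrency value is the one suggested in the thread.

# Current huge-page usage, as in the grep above.
grep -i '^Huge' /proc/meminfo

# Size vm.nr_hugepages from the postmaster's peak virtual size.
PGDATA=/var/lib/postgresql/9.6/main      # assumed data directory
PID=$(head -1 "$PGDATA/postmaster.pid")
grep '^VmPeak' /proc/"$PID"/status       # VmPeak in kB
# nr_hugepages ~ VmPeak_kB / Hugepagesize_kB (typically 2048), plus some headroom.
sudo sysctl -w vm.nr_hugepages=13000     # illustrative value only

# Raising effective_io_concurrency as suggested; it only needs a reload.
psql -d flows -c "ALTER SYSTEM SET effective_io_concurrency = 256;"
psql -d flows -c "SELECT pg_reload_conf();"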
{
"msg_contents": "On Wed, Jul 12, 2017 at 12:30 AM, Charles Nadeau <[email protected]>\nwrote:\n\n>\n> I use noop as the scheduler because it is better to let the RAID\n> controller re-arrange the IO operation before they reach the disk. Read\n> ahead is set to 128:\n>\n> charles@hpdl380g6:~$ cat /sys/block/sdc/queue/read_ahead_kb\n> 128\n> charles@hpdl380g6:~$ cat /sys/block/sdc/queue/scheduler\n> [noop] deadline cfq\n>\n>\n>\nPerhaps pg_test_fsync (\nhttps://www.postgresql.org/docs/9.6/static/pgtestfsync.html) and\npg_test_timing will help shed some light here, or at the very least give\nsome numbers to compare against.\n\nOn Wed, Jul 12, 2017 at 12:30 AM, Charles Nadeau <[email protected]> wrote:I use noop as the scheduler because it is better to let the RAID controller re-arrange the IO operation before they reach the disk. Read ahead is set to 128:charles@hpdl380g6:~$ cat /sys/block/sdc/queue/read_ahead_kb128charles@hpdl380g6:~$ cat /sys/block/sdc/queue/scheduler[noop] deadline cfq Perhaps pg_test_fsync (https://www.postgresql.org/docs/9.6/static/pgtestfsync.html) and pg_test_timing will help shed some light here, or at the very least give some numbers to compare against.",
"msg_date": "Wed, 12 Jul 2017 07:11:54 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
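The two utilities bricklen mentions ship with PostgreSQL; a usage sketch follows. The output path is an assumption and should sit on the same volume as the data directory so the fsync numbers reflect the disks under test.

# Raw fsync rate of the volume under the data directory.
pg_test_fsync -f /var/lib/postgresql/9.6/main/pg_test_fsync.tmp -s 5

# Timer overhead, useful when interpreting EXPLAIN ANALYZE timings.
pg_test_timing -d 10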
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Charles Nadeau\r\nSent: Wednesday, July 12, 2017 6:05 AM\r\nTo: Jeff Janes <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\n\r\nflows=# explain (analyze, buffers) SELECT DISTINCT\r\nflows-# srcaddr,\r\nflows-# dstaddr,\r\nflows-# dstport,\r\nflows-# COUNT(*) AS conversation,\r\nflows-# SUM(doctets) / 1024 / 1024 AS mbytes\r\nflows-# FROM\r\nflows-# flowscompact,\r\nflows-# mynetworks\r\nflows-# WHERE\r\nflows-# mynetworks.ipaddr >>= flowscompact.srcaddr\r\nflows-# AND dstaddr IN\r\nflows-# (\r\nflows(# SELECT\r\nflows(# dstaddr\r\nflows(# FROM\r\nflows(# dstexterne\r\nflows(# )\r\nflows-# GROUP BY\r\nflows-# srcaddr,\r\nflows-# dstaddr,\r\nflows-# dstport\r\nflows-# ORDER BY\r\nflows-# mbytes DESC LIMIT 50;\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.6\", size 1073741824\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.7\", size 1073741824\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.8\", size 639696896\r\nLOG: duration: 2765020.327 ms statement: explain (analyze, buffers) SELECT DISTINCT\r\n srcaddr,\r\n dstaddr,\r\n dstport,\r\n COUNT(*) AS conversation,\r\n SUM(doctets) / 1024 / 1024 AS mbytes\r\nFROM\r\n flowscompact,\r\n mynetworks\r\nWHERE\r\n mynetworks.ipaddr >>= flowscompact.srcaddr\r\n AND dstaddr IN\r\n (\r\n SELECT\r\n dstaddr\r\n FROM\r\n dstexterne\r\n )\r\nGROUP BY\r\n srcaddr,\r\n dstaddr,\r\n dstport\r\nORDER BY\r\n mbytes DESC LIMIT 50;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual time=2764548.863..2764548.891 rows=50 loops=1)\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52) (actual time=2764548.861..2764548.882 rows=50 loops=1)\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52) (actual time=2764548.859..2764548.872 rows=50 loops=1)\r\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\r\n Sort Method: quicksort Memory: 563150kB\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> GroupAggregate (cost=37698151.34..37714980.68 rows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734 loops=1)\r\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Sort (cost=37698151.34..37699273.29 rows=2243913 width=20) (actual time=2696711.428..2732781.705 rows=81896988 loops=1)\r\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Sort Method: external merge Disk: 2721856kB\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Gather (cost=19463936.00..37650810.19 rows=2243913 width=20) (actual 
time=1777219.713..2590530.887 rows=81896988 loops=1)\r\n Workers Planned: 9\r\n Workers Launched: 9\r\n Buffers: shared hit=1116590559 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Hash Semi Join (cost=19462936.00..37622883.23 rows=249324 width=20) (actual time=1847579.360..2602039.780 rows=8189699 loops=10)\r\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\r\n Buffers: shared hit=1116588309 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Nested Loop (cost=0.03..18159012.30 rows=249324 width=20) (actual time=1.562..736556.583 rows=45499045 loops=10)\r\n Buffers: shared hit=996551813 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Parallel Seq Scan on flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual time=1.506..547485.066 rows=54155970 loops=10)\r\n Buffers: shared hit=1634 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=541559704)\r\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\r\n Heap Fetches: 59971474\r\n Buffers: shared hit=996550152\r\n -> Hash (cost=19462896.74..19462896.74 rows=11210 width=4) (actual time=1847228.894..1847228.894 rows=3099798 loops=10)\r\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 141746kB\r\n Buffers: shared hit=120036496\r\n -> HashAggregate (cost=19462829.48..19462863.11 rows=11210 width=4) (actual time=1230049.015..1845955.764 rows=3099798 loops=10)\r\n Group Key: flows_1.dstaddr\r\n Buffers: shared hit=120036496\r\n -> Nested Loop Anti Join (cost=0.12..19182620.78 rows=560417390 width=4) (actual time=0.084..831832.333 rows=113420172 loops=10)\r\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\r\n Rows Removed by Join Filter: 453681377\r\n Buffers: shared hit=120036496\r\n -> Index Only Scan using flows_srcaddr_dstaddr_idx on flows flows_1 (cost=0.12..9091067.70 rows=560978368 width=4) (actual time=0.027..113052.437 rows=541559704 loops=10)\r\n Heap Fetches: 91\r\n Buffers: shared hit=120036459\r\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=5415597040)\r\n Buffers: shared hit=10\r\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.007..0.008 rows=4 loops=10)\r\n Buffers: shared hit=10\r\n Planning time: 6.689 ms\r\n Execution time: 2764860.853 ms\r\n(58 rows)\r\n\r\nRegarding \"Also using dstat I can see that iowait time is at about 25%\", I don't think the server was doing anything else. If it is important, I can repeat the benchmarks.\r\nThanks!\r\n\r\nCharles\r\n\r\nCharles,\r\n\r\nIn your original posting I couldn’t find what value you set for temp_buffers.\r\nConsidering you have plenty of RAM, try setting temp_buffers=’6GB’ and then run ‘explain (analyze, buffers) select…’ in the same session. This should alleviate “disk sort’ problem.\r\n\r\nAlso, could you post the structure of flowscompact, mynetworks, and dstextern tables with all the indexes and number of rows. 
Actually, are they all – tables, or some of them – views?\r\n\r\nIgor",
"msg_date": "Wed, 12 Jul 2017 14:31:57 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "From: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Charles Nadeau\r\nSent: Wednesday, July 12, 2017 6:05 AM\r\nTo: Jeff Janes <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\n\r\nflows=# explain (analyze, buffers) SELECT DISTINCT\r\nflows-# srcaddr,\r\nflows-# dstaddr,\r\nflows-# dstport,\r\nflows-# COUNT(*) AS conversation,\r\nflows-# SUM(doctets) / 1024 / 1024 AS mbytes\r\nflows-# FROM\r\nflows-# flowscompact,\r\nflows-# mynetworks\r\nflows-# WHERE\r\nflows-# mynetworks.ipaddr >>= flowscompact.srcaddr\r\nflows-# AND dstaddr IN\r\nflows-# (\r\nflows(# SELECT\r\nflows(# dstaddr\r\nflows(# FROM\r\nflows(# dstexterne\r\nflows(# )\r\nflows-# GROUP BY\r\nflows-# srcaddr,\r\nflows-# dstaddr,\r\nflows-# dstport\r\nflows-# ORDER BY\r\nflows-# mbytes DESC LIMIT 50;\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.6\", size 1073741824\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.7\", size 1073741824\r\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.8\", size 639696896\r\nLOG: duration: 2765020.327 ms statement: explain (analyze, buffers) SELECT DISTINCT\r\n srcaddr,\r\n dstaddr,\r\n dstport,\r\n COUNT(*) AS conversation,\r\n SUM(doctets) / 1024 / 1024 AS mbytes\r\nFROM\r\n flowscompact,\r\n mynetworks\r\nWHERE\r\n mynetworks.ipaddr >>= flowscompact.srcaddr\r\n AND dstaddr IN\r\n (\r\n SELECT\r\n dstaddr\r\n FROM\r\n dstexterne\r\n )\r\nGROUP BY\r\n srcaddr,\r\n dstaddr,\r\n dstport\r\nORDER BY\r\n mbytes DESC LIMIT 50;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual time=2764548.863..2764548.891 rows=50 loops=1)\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52) (actual time=2764548.861..2764548.882 rows=50 loops=1)\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52) (actual time=2764548.859..2764548.872 rows=50 loops=1)\r\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\r\n Sort Method: quicksort Memory: 563150kB\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> GroupAggregate (cost=37698151.34..37714980.68 rows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734 loops=1)\r\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Sort (cost=37698151.34..37699273.29 rows=2243913 width=20) (actual time=2696711.428..2732781.705 rows=81896988 loops=1)\r\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Sort Method: external merge Disk: 2721856kB\r\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\r\n I/O Timings: read=5323746.860\r\n -> Gather 
(cost=19463936.00..37650810.19 rows=2243913 width=20) (actual time=1777219.713..2590530.887 rows=81896988 loops=1)\r\n Workers Planned: 9\r\n Workers Launched: 9\r\n Buffers: shared hit=1116590559 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Hash Semi Join (cost=19462936.00..37622883.23 rows=249324 width=20) (actual time=1847579.360..2602039.780 rows=8189699 loops=10)\r\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\r\n Buffers: shared hit=1116588309 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Nested Loop (cost=0.03..18159012.30 rows=249324 width=20) (actual time=1.562..736556.583 rows=45499045 loops=10)\r\n Buffers: shared hit=996551813 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Parallel Seq Scan on flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual time=1.506..547485.066 rows=54155970 loops=10)\r\n Buffers: shared hit=1634 read=15851133\r\n I/O Timings: read=5323746.860\r\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=541559704)\r\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\r\n Heap Fetches: 59971474\r\n Buffers: shared hit=996550152\r\n -> Hash (cost=19462896.74..19462896.74 rows=11210 width=4) (actual time=1847228.894..1847228.894 rows=3099798 loops=10)\r\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 141746kB\r\n Buffers: shared hit=120036496\r\n -> HashAggregate (cost=19462829.48..19462863.11 rows=11210 width=4) (actual time=1230049.015..1845955.764 rows=3099798 loops=10)\r\n Group Key: flows_1.dstaddr\r\n Buffers: shared hit=120036496\r\n -> Nested Loop Anti Join (cost=0.12..19182620.78 rows=560417390 width=4) (actual time=0.084..831832.333 rows=113420172 loops=10)\r\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\r\n Rows Removed by Join Filter: 453681377\r\n Buffers: shared hit=120036496\r\n -> Index Only Scan using flows_srcaddr_dstaddr_idx on flows flows_1 (cost=0.12..9091067.70 rows=560978368 width=4) (actual time=0.027..113052.437 rows=541559704 loops=10)\r\n Heap Fetches: 91\r\n Buffers: shared hit=120036459\r\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=5415597040)\r\n Buffers: shared hit=10\r\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.007..0.008 rows=4 loops=10)\r\n Buffers: shared hit=10\r\n Planning time: 6.689 ms\r\n Execution time: 2764860.853 ms\r\n(58 rows)\r\n\r\nRegarding \"Also using dstat I can see that iowait time is at about 25%\", I don't think the server was doing anything else. If it is important, I can repeat the benchmarks.\r\nThanks!\r\n\r\nCharles\r\n\r\nCharles,\r\n\r\nIn your original posting I couldn’t find what value you set for temp_buffers.\r\nConsidering you have plenty of RAM, try setting temp_buffers=’6GB’ and then run ‘explain (analyze, buffers) select…’ in the same session. This should alleviate “disk sort’ problem.\r\n\r\nAlso, could you post the structure of flowscompact, mynetworks, and dstextern tables with all the indexes and number of rows. 
Actually, are they all – tables, or some of them – views?\r\n\r\nIgor\r\n\r\n\r\nSorry, I misstated the parameter to change.\r\nIt is work_mem (not temp_buffers) you should try to increase to 6GB.\r\n\r\nIgor",
"msg_date": "Wed, 12 Jul 2017 16:39:08 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
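Following Igor's correction, a sketch of raising work_mem for a single session and re-running the query from the thread under EXPLAIN (ANALYZE, BUFFERS); the point is to see whether the "Sort Method: external merge Disk:" lines disappear from the plan. The 6GB figure is the one Igor proposes and is generous given parallel workers each get their own allowance.

psql -d flows <<'SQL'
SET work_mem = '6GB';   -- per sort/hash node, session-local
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT srcaddr, dstaddr, dstport,
       COUNT(*) AS conversation,
       SUM(doctets) / 1024 / 1024 AS mbytes
FROM flowscompact, mynetworks
WHERE mynetworks.ipaddr >>= flowscompact.srcaddr
  AND dstaddr IN (SELECT dstaddr FROM dstexterne)
GROUP BY srcaddr, dstaddr, dstport
ORDER BY mbytes DESC
LIMIT 50;
SQL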
{
"msg_contents": "On Wed, Jul 12, 2017 at 3:04 AM, Charles Nadeau <[email protected]>\nwrote:\n\n> Jeff,\n>\n> Here are the 2 EXPLAINs for one of my simplest query:\n>\n\n\nIt looks like dstexterne and flowcompact are both views over flow. Can you\nshare the definition of those views?\n\nI think the iowait > 12.5% is due to the parallel query execution. But\nthen the question is, why is it only 25% when you have 10 fold parallelism?\n\nIt certainly looks like you are doing more than 4MB/s there, so maybe\nsomething is wrong with the instrumentation, or how you are interpreting\nit.\n\nAlthough it is still less than perhaps it could do. To put a baseline on\nwhat you can expect out of parallel seq scans, can you do something like:\n\nexplain (analyze, buffers) select avg(doctets) from flow;\n\nCheers,\n\nJeff\n\nOn Wed, Jul 12, 2017 at 3:04 AM, Charles Nadeau <[email protected]> wrote:Jeff,Here are the 2 EXPLAINs for one of my simplest query:It looks like dstexterne and flowcompact are both views over flow. Can you share the definition of those views?I think the iowait > 12.5% is due to the parallel query execution. But then the question is, why is it only 25% when you have 10 fold parallelism?It certainly looks like you are doing more than 4MB/s there, so maybe something is wrong with the instrumentation, or how you are interpreting it. Although it is still less than perhaps it could do. To put a baseline on what you can expect out of parallel seq scans, can you do something like:explain (analyze, buffers) select avg(doctets) from flow;Cheers,Jeff",
"msg_date": "Wed, 12 Jul 2017 15:27:27 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
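A sketch of the baseline measurement Jeff asks for: drop the OS cache, start an I/O monitor, then run the aggregate so the sequential-scan rate can be read off the device statistics. The device name sdc is taken from earlier messages; the rest is illustrative.

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches    # cold cache, as in Mark's test
iostat -xm sdc 5 &                                    # or: dstat -d; stop with: kill %1
psql -d flows -c "EXPLAIN (ANALYZE, BUFFERS) SELECT avg(doctets) FROM flows;"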
{
"msg_contents": "Mark,\n\nFirst I must say that I changed my disks configuration from 4 disks in RAID\n10 to 5 disks in RAID 0 because I almost ran out of disk space during the\nlast ingest of data.\nHere is the result test you asked. It was done with a cold cache:\n\nflows=# \\timing\nTiming is on.\nflows=# explain select count(*) from flows;\n QUERY PLAN\n\n------------------------------------------------------------\n-----------------------------------\n Finalize Aggregate (cost=17214914.09..17214914.09 rows=1 width=8)\n -> Gather (cost=17214914.07..17214914.09 rows=1 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=17213914.07..17213914.07 rows=1\nwidth=8)\n -> Parallel Seq Scan on flows (cost=0.00..17019464.49\nrows=388899162 width=0)\n(5 rows)\n\nTime: 171.835 ms\nflows=# select pg_relation_size('flows');\n pg_relation_size\n------------------\n 129865867264\n(1 row)\n\nTime: 57.157 ms\nflows=# select count(*) from flows;\nLOG: duration: 625546.522 ms statement: select count(*) from flows;\n count\n-----------\n 589831190\n(1 row)\n\nTime: 625546.662 ms\n\nThe throughput reported by Postgresql is almost 198MB/s, and the throughput\nas mesured by dstat during the query execution was between 25 and 299MB/s.\nIt is much better than what I had before! The i/o wait was about 12% all\nthrough the query. One thing I noticed is the discrepency between the read\nthroughput reported by pg_activity and the one reported by dstat:\npg_activity always report a value lower than dstat.\n\nBesides the change of disks configuration, here is what contributed the\nmost to the improvment of the performance so far:\n\nUsing Hugepage\nIncreasing effective_io_concurrency to 256\nReducing random_page_cost from 22 to 4\nReducing min_parallel_relation_size to 512kB to have more workers when\ndoing sequential parallel scan of my biggest table\n\n\nThanks for recomending this test, I now know what the real throughput\nshould be!\n\nCharles\n\nOn Wed, Jul 12, 2017 at 4:11 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> Hmm - how are you measuring that sequential scan speed of 4MB/s? 
I'd\n> recommend doing a very simple test e.g, here's one on my workstation - 13\n> GB single table on 1 SATA drive - cold cache after reboot, sequential scan\n> using Postgres 9.6.2:\n>\n> bench=# EXPLAIN SELECT count(*) FROM pgbench_accounts;\n> QUERY PLAN\n> ------------------------------------------------------------\n> ------------------------\n> Aggregate (cost=2889345.00..2889345.01 rows=1 width=8)\n> -> Seq Scan on pgbench_accounts (cost=0.00..2639345.00 rows=100000000\n> width=0)\n> (2 rows)\n>\n>\n> bench=# SELECT pg_relation_size('pgbench_accounts');\n> pg_relation_size\n> ------------------\n> 13429514240\n> (1 row)\n>\n> bench=# SELECT count(*) FROM pgbench_accounts;\n> count\n> -----------\n> 100000000\n> (1 row)\n>\n> Time: 118884.277 ms\n>\n>\n> So doing the math seq read speed is about 110MB/s (i.e 13 GB in 120 sec).\n> Sure enough, while I was running the query iostat showed:\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\n> avgqu-sz await r_await w_await svctm %util\n> sda 0.00 0.00 926.00 0.00 114.89 0.00 254.10\n> 1.90 2.03 2.03 0.00 1.08 100.00\n>\n>\n> So might be useful for us to see something like that from your system -\n> note you need to check you really have flushed the cache, and that no other\n> apps are using the db.\n>\n> regards\n>\n> Mark\n>\n>\n> On 12/07/17 00:46, Charles Nadeau wrote:\n>\n>> After reducing random_page_cost to 4 and testing more, I can report that\n>> the aggregate read throughput for parallel sequential scan is about 90MB/s.\n>> However the throughput for sequential scan is still around 4MB/s.\n>>\n>>\n>\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/",
"msg_date": "Fri, 14 Jul 2017 16:34:24 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
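A quick cross-check of the ~198 MB/s figure reported above (relation size divided by the logged count(*) duration), plus the reloadable settings Charles credits for the improvement. The values are the ones he reports, not recommendations; min_parallel_relation_size is the 9.6 name (later releases call it min_parallel_table_scan_size).

# 129865867264 bytes read in ~625.5 s -> about 198 MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 129865867264 / 1024 / 1024 / 625.5 }'

# Settings listed as having helped; all three can be picked up with a reload.
psql -d flows -c "ALTER SYSTEM SET effective_io_concurrency = 256;"
psql -d flows -c "ALTER SYSTEM SET random_page_cost = 4;"
psql -d flows -c "ALTER SYSTEM SET min_parallel_relation_size = '512kB';"
psql -d flows -c "SELECT pg_reload_conf();"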
{
"msg_contents": "Igor,\n\nInitially temp_buffer was left to its default value (8MB). Watching the\ncontent of the directory that stores the temporary files, I found that I\nneed at most 21GB of temporary files space. Should I set temp_buffer to\n21GB?\nHere is the explain you requested with work_mem set to 6GB:\n\nflows=# set work_mem='6GB';\nSET\nflows=# explain (analyze, buffers) SELECT DISTINCT\n srcaddr,\n dstaddr,\n dstport,\n COUNT(*) AS conversation,\n SUM(doctets) / 1024 / 1024 AS mbytes\nFROM\n flowscompact,\n mynetworks\nWHERE\n mynetworks.ipaddr >>= flowscompact.srcaddr\n AND dstaddr IN\n (\n SELECT\n dstaddr\n FROM\n dstexterne\n )\nGROUP BY\n srcaddr,\n dstaddr,\n dstport\nORDER BY\n mbytes DESC LIMIT 50;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual\ntime=2227678.196..2227678.223 rows=50 loops=1)\n Buffers: shared hit=728798038 read=82974833, temp read=381154\nwritten=381154\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52)\n(actual time=2227678.194..2227678.217 rows=50 loops=1)\n Buffers: shared hit=728798038 read=82974833, temp read=381154\nwritten=381154\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52)\n(actual time=2227678.192..2227678.202 rows=50 loops=1)\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n'1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n(count(*))\n Sort Method: quicksort Memory: 654395kB\n Buffers: shared hit=728798038 read=82974833, temp\nread=381154 written=381154\n -> GroupAggregate (cost=48059426.65..48079260.50\nrows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671\nloops=1)\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n Buffers: shared hit=728798038 read=82974833, temp\nread=381154 written=381154\n -> Sort (cost=48059426.65..48060748.90 rows=2644514\nwidth=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\n Sort Key: flows.srcaddr, flows.dstaddr,\nflows.dstport\n Sort Method: external merge Disk: 3049216kB\n Buffers: shared hit=728798038 read=82974833,\ntemp read=381154 written=381154\n -> Gather (cost=30060688.07..48003007.07\nrows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640\nloops=1)\n Workers Planned: 12\n Workers Launched: 12\n Buffers: shared hit=728798037 read=82974833\n -> Hash Semi Join\n (cost=30059688.07..47951761.31 rows=220376 width=20) (actual\ntime=1268845.181..2007864.725 rows=7057357 loops=13)\n Hash Cond: (flows.dstaddr =\nflows_1.dstaddr)\n Buffers: shared hit=728795193\nread=82974833\n -> Nested Loop\n (cost=0.03..17891246.86 rows=220376 width=20) (actual\ntime=0.207..723790.283 rows=37910370 loops=13)\n Buffers: shared hit=590692229\nread=14991777\n -> Parallel Seq Scan on flows\n (cost=0.00..16018049.14 rows=55094048 width=20) (actual\ntime=0.152..566179.117 rows=45371630 loops=13)\n Buffers: shared\nhit=860990 read=14991777\n -> Index Only Scan using\nmynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n(actual time=0.002..0.002 rows=1 loops=589831190)\n Index Cond: (ipaddr >>=\n(flows.srcaddr)::ip4r)\n Heap Fetches: 0\n Buffers: shared\nhit=589831203\n -> Hash\n (cost=30059641.47..30059641.47 rows=13305 width=4) (actual\ntime=1268811.101..1268811.101 rows=3803508 loops=13)\n Buckets: 4194304 (originally\n16384) Batches: 1 (originally 1) Memory Usage: 
166486kB\n Buffers: shared hit=138102964\nread=67983056\n -> HashAggregate\n (cost=30059561.64..30059601.56 rows=13305 width=4) (actual\ntime=1265248.165..1267432.083 rows=3803508 loops=13)\n Group Key:\nflows_1.dstaddr\n Buffers: shared\nhit=138102964 read=67983056\n -> Nested Loop Anti\nJoin (cost=0.00..29729327.92 rows=660467447 width=4) (actual\ntime=0.389..1201072.707 rows=125838232 loops=13)\n Join Filter:\n(mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n Rows Removed by\nJoin Filter: 503353617\n Buffers: shared\nhit=138102964 read=67983056\n -> Seq Scan on\nflows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual\ntime=0.322..343152.274 rows=589831190 loops=13)\n Buffers:\nshared hit=138102915 read=67983056\n -> Materialize\n (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2\nloops=7667805470)\n Buffers:\nshared hit=13\n -> Seq Scan\non mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual\ntime=0.006..0.007 rows=4 loops=13)\n\n Buffers: shared hit=13\n Planning time: 0.941 ms\n Execution time: 2228345.171 ms\n(48 rows)\n\n\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query\nwas running, the i/o wait was much lower, hovering aroun 3% then it jumped\n45% until almost the end of the query.\n\nflowscompact and dstexterne are actually views. I use views to simplify\nquery writing and to \"abstract\" queries that are use often in other\nqueries. flowscompact is a view built on table flows (having about 590\nmillion rows), it only keeps the most often used fields.\n\nflows=# \\d+ flowscompact;\n View \"public.flowscompact\"\n Column | Type | Modifiers | Storage | Description\n-----------+--------------------------+-----------+---------+-------------\n flow_id | bigint | | plain |\n sysuptime | bigint | | plain |\n exaddr | ip4 | | plain |\n dpkts | integer | | plain |\n doctets | bigint | | plain |\n first | bigint | | plain |\n last | bigint | | plain |\n srcaddr | ip4 | | plain |\n dstaddr | ip4 | | plain |\n srcport | integer | | plain |\n dstport | integer | | plain |\n prot | smallint | | plain |\n tos | smallint | | plain |\n tcp_flags | smallint | | plain |\n timestamp | timestamp with time zone | | plain |\nView definition:\n SELECT flowstimestamp.flow_id,\n flowstimestamp.sysuptime,\n flowstimestamp.exaddr,\n flowstimestamp.dpkts,\n flowstimestamp.doctets,\n flowstimestamp.first,\n flowstimestamp.last,\n flowstimestamp.srcaddr,\n flowstimestamp.dstaddr,\n flowstimestamp.srcport,\n flowstimestamp.dstport,\n flowstimestamp.prot,\n flowstimestamp.tos,\n flowstimestamp.tcp_flags,\n flowstimestamp.\"timestamp\"\n FROM flowstimestamp;\n\nmynetworks is a table having one column and 4 rows; it contains a list of\nour network networks:\n\nflows=# select * from mynetworks;\n ipaddr\n----------------\n 192.168.0.0/24\n 10.112.12.0/30\n 10.112.12.4/30\n 10.112.12.8/30\n(4 row)\nflows=# \\d+ mynetworks;\n Table \"public.mynetworks\"\n Column | Type | Modifiers | Storage | Stats target | Description\n--------+------+-----------+---------+--------------+-------------\n ipaddr | ip4r | | plain | |\nIndexes:\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\n\ndstexterne is a view listing all the destination IPv4 addresses not inside\nour network; it has one column and 3.8 million rows.\n\nflows=# \\d+ dstexterne;\n View \"public.dstexterne\"\n Column | Type | Modifiers | Storage | Description\n---------+------+-----------+---------+-------------\n dstaddr | ip4 | | plain |\nView definition:\n SELECT DISTINCT flowscompact.dstaddr\n FROM 
flowscompact\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\n WHERE mynetworks.ipaddr IS NULL;\n\nThanks!\n\nCharles\n\nOn Wed, Jul 12, 2017 at 6:39 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected] <[email protected]>] *On Behalf\n> Of *Charles Nadeau\n> *Sent:* Wednesday, July 12, 2017 6:05 AM\n> *To:* Jeff Janes <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n>\n>\n> flows=# explain (analyze, buffers) SELECT DISTINCT\n>\n> flows-# srcaddr,\n>\n> flows-# dstaddr,\n>\n> flows-# dstport,\n>\n> flows-# COUNT(*) AS conversation,\n>\n> flows-# SUM(doctets) / 1024 / 1024 AS mbytes\n>\n> flows-# FROM\n>\n> flows-# flowscompact,\n>\n> flows-# mynetworks\n>\n> flows-# WHERE\n>\n> flows-# mynetworks.ipaddr >>= flowscompact.srcaddr\n>\n> flows-# AND dstaddr IN\n>\n> flows-# (\n>\n> flows(# SELECT\n>\n> flows(# dstaddr\n>\n> flows(# FROM\n>\n> flows(# dstexterne\n>\n> flows(# )\n>\n> flows-# GROUP BY\n>\n> flows-# srcaddr,\n>\n> flows-# dstaddr,\n>\n> flows-# dstport\n>\n> flows-# ORDER BY\n>\n> flows-# mbytes DESC LIMIT 50;\n>\n> LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_\n> 201608131/pgsql_tmp/pgsql_tmp14573.6\", size 1073741824\n>\n> LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_\n> 201608131/pgsql_tmp/pgsql_tmp14573.7\", size 1073741824\n>\n> LOG: temporary file: path \"pg_tblspc/36238/PG_9.6_\n> 201608131/pgsql_tmp/pgsql_tmp14573.8\", size 639696896\n>\n> LOG: duration: 2765020.327 ms statement: explain (analyze, buffers)\n> SELECT DISTINCT\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport,\n>\n> COUNT(*) AS conversation,\n>\n> SUM(doctets) / 1024 / 1024 AS mbytes\n>\n> FROM\n>\n> flowscompact,\n>\n> mynetworks\n>\n> WHERE\n>\n> mynetworks.ipaddr >>= flowscompact.srcaddr\n>\n> AND dstaddr IN\n>\n> (\n>\n> SELECT\n>\n> dstaddr\n>\n> FROM\n>\n> dstexterne\n>\n> )\n>\n> GROUP BY\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport\n>\n> ORDER BY\n>\n> mbytes DESC LIMIT 50;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> --------------------------------------------------\n>\n> Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual\n> time=2764548.863..2764548.891 rows=50 loops=1)\n>\n> Buffers: shared hit=1116590560 read=15851133, temp read=340244\n> written=340244\n>\n> I/O Timings: read=5323746.860\n>\n> -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52)\n> (actual time=2764548.861..2764548.882 rows=50 loops=1)\n>\n> Buffers: shared hit=1116590560 read=15851133, temp read=340244\n> written=340244\n>\n> I/O Timings: read=5323746.860\n>\n> -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52)\n> (actual time=2764548.859..2764548.872 rows=50 loops=1)\n>\n> Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n> '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n> (count(*))\n>\n> Sort Method: quicksort Memory: 563150kB\n>\n> Buffers: shared hit=1116590560 read=15851133, temp\n> read=340244 written=340244\n>\n> I/O Timings: read=5323746.860\n>\n> -> GroupAggregate (cost=37698151.34..37714980.68\n> rows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734\n> loops=1)\n>\n> Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n>\n> Buffers: shared hit=1116590560 
read=15851133, temp\n> read=340244 written=340244\n>\n> I/O Timings: read=5323746.860\n>\n> -> Sort (cost=37698151.34..37699273.29\n> rows=2243913 width=20) (actual time=2696711.428..2732781.705 rows=81896988\n> loops=1)\n>\n> Sort Key: flows.srcaddr, flows.dstaddr,\n> flows.dstport\n>\n> Sort Method: external merge Disk: 2721856kB\n>\n> Buffers: shared hit=1116590560 read=15851133,\n> temp read=340244 written=340244\n>\n> I/O Timings: read=5323746.860\n>\n> -> Gather (cost=19463936.00..37650810.19\n> rows=2243913 width=20) (actual time=1777219.713..2590530.887 rows=81896988\n> loops=1)\n>\n> Workers Planned: 9\n>\n> Workers Launched: 9\n>\n> Buffers: shared hit=1116590559\n> read=15851133\n>\n> I/O Timings: read=5323746.860\n>\n> -> Hash Semi Join\n> (cost=19462936.00..37622883.23 rows=249324 width=20) (actual\n> time=1847579.360..2602039.780 rows=8189699 loops=10)\n>\n> Hash Cond: (flows.dstaddr =\n> flows_1.dstaddr)\n>\n> Buffers: shared hit=1116588309\n> read=15851133\n>\n> I/O Timings: read=5323746.860\n>\n> -> Nested Loop\n> (cost=0.03..18159012.30 rows=249324 width=20) (actual\n> time=1.562..736556.583 rows=45499045 loops=10)\n>\n> Buffers: shared hit=996551813\n> read=15851133\n>\n> I/O Timings: read=5323746.860\n>\n> -> Parallel Seq Scan on\n> flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual\n> time=1.506..547485.066 rows=54155970 loops=10)\n>\n> Buffers: shared\n> hit=1634 read=15851133\n>\n> I/O Timings:\n> read=5323746.860\n>\n> -> Index Only Scan using\n> mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n> (actual time=0.002..0.002 rows=1 loops=541559704)\n>\n> Index Cond: (ipaddr >>=\n> (flows.srcaddr)::ip4r)\n>\n> Heap Fetches: 59971474\n>\n> Buffers: shared\n> hit=996550152\n>\n> -> Hash\n> (cost=19462896.74..19462896.74 rows=11210 width=4) (actual\n> time=1847228.894..1847228.894 rows=3099798 loops=10)\n>\n> Buckets: 4194304 (originally\n> 16384) Batches: 1 (originally 1) Memory Usage: 141746kB\n>\n> Buffers: shared hit=120036496\n>\n> -> HashAggregate\n> (cost=19462829.48..19462863.11 rows=11210 width=4) (actual\n> time=1230049.015..1845955.764 rows=3099798 loops=10)\n>\n> Group Key:\n> flows_1.dstaddr\n>\n> Buffers: shared\n> hit=120036496\n>\n> -> Nested Loop Anti\n> Join (cost=0.12..19182620.78 rows=560417390 width=4) (actual\n> time=0.084..831832.333 rows=113420172 loops=10)\n>\n> Join Filter:\n> (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n>\n> Rows Removed by\n> Join Filter: 453681377\n>\n> Buffers: shared\n> hit=120036496\n>\n> -> Index Only\n> Scan using flows_srcaddr_dstaddr_idx on flows flows_1\n> (cost=0.12..9091067.70 rows=560978368 width=4) (actual\n> time=0.027..113052.437 rows=541559704 loops=10)\n>\n> Heap\n> Fetches: 91\n>\n> Buffers:\n> shared hit=120036459\n>\n> -> Materialize\n> (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2\n> loops=5415597040)\n>\n> Buffers:\n> shared hit=10\n>\n> -> Seq\n> Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual\n> time=0.007..0.008 rows=4 loops=10)\n>\n>\n> Buffers: shared hit=10\n>\n> Planning time: 6.689 ms\n>\n> Execution time: 2764860.853 ms\n>\n> (58 rows)\n>\n>\n>\n> Regarding \"Also using dstat I can see that iowait time is at about 25%\", I\n> don't think the server was doing anything else. 
If it is important, I can\n> repeat the benchmarks.\n>\n> Thanks!\n>\n>\n>\n> Charles\n>\n>\n>\n> Charles,\n>\n>\n>\n> In your original posting I couldn’t find what value you set for\n> temp_buffers.\n>\n> Considering you have plenty of RAM, try setting temp_buffers=’6GB’ and\n> then run ‘explain (analyze, buffers) select…’ in the same session. This\n> should alleviate “disk sort’ problem.\n>\n>\n>\n> Also, could you post the structure of flowscompact, mynetworks, and\n> dstextern tables with all the indexes and number of rows. Actually, are\n> they all – tables, or some of them – views?\n>\n>\n>\n> Igor\n>\n>\n>\n>\n>\n> Sorry, I misstated the parameter to change.\n>\n> It is work_mem (not temp_buffers) you should try to increase to 6GB.\n>\n>\n>\n> Igor\n>\n>\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,Initially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. Should I set temp_buffer to 21GB?Here is the explain you requested with work_mem set to 6GB:flows=# set work_mem='6GB';SETflows=# explain (analyze, buffers) SELECT DISTINCT srcaddr, dstaddr, dstport, COUNT(*) AS conversation, SUM(doctets) / 1024 / 1024 AS mbytesFROM flowscompact, mynetworksWHERE mynetworks.ipaddr >>= flowscompact.srcaddr AND dstaddr IN ( SELECT dstaddr FROM dstexterne )GROUP BY srcaddr, dstaddr, dstportORDER BY mbytes DESC LIMIT 50; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1) Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154 -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1) Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154 -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1) Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*)) Sort Method: quicksort Memory: 654395kB Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154 -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1) Group Key: flows.srcaddr, flows.dstaddr, flows.dstport Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154 -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1) Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport Sort Method: external merge Disk: 3049216kB Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154 -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1) Workers Planned: 12 Workers Launched: 12 Buffers: shared hit=728798037 read=82974833 -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13) Hash Cond: (flows.dstaddr = flows_1.dstaddr) Buffers: shared hit=728795193 read=82974833 -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 
loops=13) Buffers: shared hit=590692229 read=14991777 -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13) Buffers: shared hit=860990 read=14991777 -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190) Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r) Heap Fetches: 0 Buffers: shared hit=589831203 -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13) Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB Buffers: shared hit=138102964 read=67983056 -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13) Group Key: flows_1.dstaddr Buffers: shared hit=138102964 read=67983056 -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13) Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r) Rows Removed by Join Filter: 503353617 Buffers: shared hit=138102964 read=67983056 -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13) Buffers: shared hit=138102915 read=67983056 -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470) Buffers: shared hit=13 -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13) Buffers: shared hit=13 Planning time: 0.941 ms Execution time: 2228345.171 ms(48 rows)With a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query. flowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. 
flowscompact is a view built on table flows (having about 590 million rows), it only keeps the most often used fields.flows=# \\d+ flowscompact; View \"public.flowscompact\" Column | Type | Modifiers | Storage | Description -----------+--------------------------+-----------+---------+------------- flow_id | bigint | | plain | sysuptime | bigint | | plain | exaddr | ip4 | | plain | dpkts | integer | | plain | doctets | bigint | | plain | first | bigint | | plain | last | bigint | | plain | srcaddr | ip4 | | plain | dstaddr | ip4 | | plain | srcport | integer | | plain | dstport | integer | | plain | prot | smallint | | plain | tos | smallint | | plain | tcp_flags | smallint | | plain | timestamp | timestamp with time zone | | plain | View definition: SELECT flowstimestamp.flow_id, flowstimestamp.sysuptime, flowstimestamp.exaddr, flowstimestamp.dpkts, flowstimestamp.doctets, flowstimestamp.first, flowstimestamp.last, flowstimestamp.srcaddr, flowstimestamp.dstaddr, flowstimestamp.srcport, flowstimestamp.dstport, flowstimestamp.prot, flowstimestamp.tos, flowstimestamp.tcp_flags, flowstimestamp.\"timestamp\" FROM flowstimestamp;mynetworks is a table having one column and 4 rows; it contains a list of our network networks:flows=# select * from mynetworks; ipaddr ---------------- 192.168.0.0/24 10.112.12.0/30 10.112.12.4/30 10.112.12.8/30(4 row)flows=# \\d+ mynetworks; Table \"public.mynetworks\" Column | Type | Modifiers | Storage | Stats target | Description --------+------+-----------+---------+--------------+------------- ipaddr | ip4r | | plain | | Indexes: \"mynetworks_ipaddr_idx\" gist (ipaddr)dstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 3.8 million rows.flows=# \\d+ dstexterne; View \"public.dstexterne\" Column | Type | Modifiers | Storage | Description ---------+------+-----------+---------+------------- dstaddr | ip4 | | plain | View definition: SELECT DISTINCT flowscompact.dstaddr FROM flowscompact LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r WHERE mynetworks.ipaddr IS NULL;Thanks!CharlesOn Wed, Jul 12, 2017 at 6:39 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n\n \n\n\nFrom:\[email protected] [mailto:[email protected]]\nOn Behalf Of Charles Nadeau\nSent: Wednesday, July 12, 2017 6:05 AM\nTo: Jeff Janes <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \n\n\n\n\n \n\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\n\n\nflows-# srcaddr,\n\n\nflows-# dstaddr,\n\n\nflows-# dstport,\n\n\nflows-# COUNT(*) AS conversation,\n\n\nflows-# SUM(doctets) / 1024 / 1024 AS mbytes \n\n\nflows-# FROM\n\n\nflows-# flowscompact,\n\n\nflows-# mynetworks \n\n\nflows-# WHERE\n\n\nflows-# mynetworks.ipaddr >>= flowscompact.srcaddr \n\n\nflows-# AND dstaddr IN \n\n\nflows-# (\n\n\nflows(# SELECT\n\n\nflows(# dstaddr \n\n\nflows(# FROM\n\n\nflows(# dstexterne\n\n\nflows(# )\n\n\nflows-# GROUP BY\n\n\nflows-# srcaddr,\n\n\nflows-# dstaddr,\n\n\nflows-# dstport \n\n\nflows-# ORDER BY\n\n\nflows-# mbytes DESC LIMIT 50;\n\n\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.6\", size 1073741824\n\n\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.7\", size 1073741824\n\n\nLOG: temporary file: path \"pg_tblspc/36238/PG_9.6_201608131/pgsql_tmp/pgsql_tmp14573.8\", size 639696896\n\n\nLOG: duration: 2765020.327 ms statement: explain (analyze, buffers) SELECT DISTINCT\n\n\n 
srcaddr,\n\n\n dstaddr,\n\n\n dstport,\n\n\n COUNT(*) AS conversation,\n\n\n SUM(doctets) / 1024 / 1024 AS mbytes \n\n\nFROM\n\n\n flowscompact,\n\n\n mynetworks \n\n\nWHERE\n\n\n mynetworks.ipaddr >>= flowscompact.srcaddr \n\n\n AND dstaddr IN \n\n\n (\n\n\n SELECT\n\n\n dstaddr \n\n\n FROM\n\n\n dstexterne\n\n\n )\n\n\nGROUP BY\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport \n\n\nORDER BY\n\n\n mbytes DESC LIMIT 50;\n\n\n QUERY PLAN \n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Limit (cost=37762321.83..37762321.98 rows=50 width=52) (actual time=2764548.863..2764548.891 rows=50 loops=1)\n\n\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\n\n\n I/O Timings: read=5323746.860\n\n\n -> Unique (cost=37762321.83..37769053.57 rows=2243913 width=52) (actual time=2764548.861..2764548.882 rows=50 loops=1)\n\n\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\n\n\n I/O Timings: read=5323746.860\n\n\n -> Sort (cost=37762321.83..37763443.79 rows=2243913 width=52) (actual time=2764548.859..2764548.872 rows=50 loops=1)\n\n\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\n\n\n Sort Method: quicksort Memory: 563150kB\n\n\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\n\n\n I/O Timings: read=5323746.860\n\n\n -> GroupAggregate (cost=37698151.34..37714980.68 rows=2243913 width=52) (actual time=2696721.610..2752109.551 rows=4691734 loops=1)\n\n\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\n\n\n I/O Timings: read=5323746.860\n\n\n -> Sort (cost=37698151.34..37699273.29 rows=2243913 width=20) (actual time=2696711.428..2732781.705 rows=81896988 loops=1)\n\n\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Sort Method: external merge Disk: 2721856kB\n\n\n Buffers: shared hit=1116590560 read=15851133, temp read=340244 written=340244\n\n\n I/O Timings: read=5323746.860\n\n\n -> Gather (cost=19463936.00..37650810.19 rows=2243913 width=20) (actual time=1777219.713..2590530.887 rows=81896988 loops=1)\n\n\n Workers Planned: 9\n\n\n Workers Launched: 9\n\n\n Buffers: shared hit=1116590559 read=15851133\n\n\n I/O Timings: read=5323746.860\n\n\n -> Hash Semi Join (cost=19462936.00..37622883.23 rows=249324 width=20) (actual time=1847579.360..2602039.780 rows=8189699 loops=10)\n\n\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\n\n\n Buffers: shared hit=1116588309 read=15851133\n\n\n I/O Timings: read=5323746.860\n\n\n -> Nested Loop (cost=0.03..18159012.30 rows=249324 width=20) (actual time=1.562..736556.583 rows=45499045 loops=10)\n\n\n Buffers: shared hit=996551813 read=15851133\n\n\n I/O Timings: read=5323746.860\n\n\n -> Parallel Seq Scan on flows (cost=0.00..16039759.79 rows=62330930 width=20) (actual time=1.506..547485.066 rows=54155970 loops=10)\n\n\n Buffers: shared hit=1634 read=15851133\n\n\n I/O Timings: read=5323746.860\n\n\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=541559704)\n\n\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\n\n\n Heap Fetches: 59971474\n\n\n Buffers: shared hit=996550152\n\n\n -> Hash (cost=19462896.74..19462896.74 
rows=11210 width=4) (actual time=1847228.894..1847228.894 rows=3099798 loops=10)\n\n\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 141746kB\n\n\n Buffers: shared hit=120036496\n\n\n -> HashAggregate (cost=19462829.48..19462863.11 rows=11210 width=4) (actual time=1230049.015..1845955.764 rows=3099798 loops=10)\n\n\n Group Key: flows_1.dstaddr\n\n\n Buffers: shared hit=120036496\n\n\n -> Nested Loop Anti Join (cost=0.12..19182620.78 rows=560417390 width=4) (actual time=0.084..831832.333 rows=113420172 loops=10)\n\n\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n\n\n Rows Removed by Join Filter: 453681377\n\n\n Buffers: shared hit=120036496\n\n\n -> Index Only Scan using flows_srcaddr_dstaddr_idx on flows flows_1 (cost=0.12..9091067.70 rows=560978368 width=4) (actual time=0.027..113052.437 rows=541559704 loops=10)\n\n\n Heap Fetches: 91\n\n\n Buffers: shared hit=120036459\n\n\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=5415597040)\n\n\n Buffers: shared hit=10\n\n\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.007..0.008 rows=4 loops=10)\n\n\n Buffers: shared hit=10\n\n\n Planning time: 6.689 ms\n\n\n Execution time: 2764860.853 ms\n\n\n(58 rows)\n\n\n\n \n\n\nRegarding \"Also using dstat I can see that iowait time is at about 25%\", I don't think the server was doing anything else. If it is important, I can repeat the benchmarks.\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n \nCharles,\n \nIn your original posting I couldn’t find what value you set for temp_buffers.\nConsidering you have plenty of RAM, try setting temp_buffers=’6GB’ and then run ‘explain (analyze, buffers) select…’ in the same session. This should alleviate\n “disk sort’ problem.\n \nAlso, could you post the structure of\nflowscompact, mynetworks, and dstextern \ntables with all the indexes and number of rows. Actually, are they all – tables, or some of them – views?\n \nIgor\n\n \n\n \nSorry, I misstated the parameter to change.\nIt is work_mem (not temp_buffers) you should try to increase to 6GB.\n \nIgor\n \n \n\n\n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Fri, 14 Jul 2017 17:34:43 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
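Charles gauges temporary-file pressure by watching the pgsql_tmp directory; the same information is visible from SQL. A minimal sketch, assuming PostgreSQL 9.2 or later (the 1GB threshold is purely illustrative, and changing log_temp_files requires superuser rights):

-- Cumulative temp-file usage for the current database since the last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_spilled
FROM pg_stat_database
WHERE datname = current_database();

-- Log every temporary file larger than 1GB so spills show up in the server log
SET log_temp_files = '1GB';

As Igor points out, temp_buffers is not the knob for this: it only caps memory used for temporary tables, not the sort/hash spill files that produce these temp files.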
{
"msg_contents": "From: Charles Nadeau [mailto:[email protected]]\r\nSent: Friday, July 14, 2017 11:35 AM\r\nTo: Igor Neyman <[email protected]>\r\nCc: Jeff Janes <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nIgor,\r\n\r\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. Should I set temp_buffer to 21GB?\r\nHere is the explain you requested with work_mem set to 6GB:\r\nflows=# set work_mem='6GB';\r\nSET\r\nflows=# explain (analyze, buffers) SELECT DISTINCT\r\n srcaddr,\r\n dstaddr,\r\n dstport,\r\n COUNT(*) AS conversation,\r\n SUM(doctets) / 1024 / 1024 AS mbytes\r\nFROM\r\n flowscompact,\r\n mynetworks\r\nWHERE\r\n mynetworks.ipaddr >>= flowscompact.srcaddr\r\n AND dstaddr IN\r\n (\r\n SELECT\r\n dstaddr\r\n FROM\r\n dstexterne\r\n )\r\nGROUP BY\r\n srcaddr,\r\n dstaddr,\r\n dstport\r\nORDER BY\r\n mbytes DESC LIMIT 50;\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\r\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\r\n Sort Method: quicksort Memory: 654395kB\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\r\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\r\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Sort Method: external merge Disk: 3049216kB\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\r\n Workers Planned: 12\r\n Workers Launched: 12\r\n Buffers: shared hit=728798037 read=82974833\r\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13)\r\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\r\n Buffers: shared hit=728795193 read=82974833\r\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 loops=13)\r\n Buffers: shared hit=590692229 read=14991777\r\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\r\n Buffers: shared hit=860990 read=14991777\r\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual 
time=0.002..0.002 rows=1 loops=589831190)\r\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\r\n Heap Fetches: 0\r\n Buffers: shared hit=589831203\r\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\r\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\r\n Buffers: shared hit=138102964 read=67983056\r\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\r\n Group Key: flows_1.dstaddr\r\n Buffers: shared hit=138102964 read=67983056\r\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\r\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\r\n Rows Removed by Join Filter: 503353617\r\n Buffers: shared hit=138102964 read=67983056\r\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\r\n Buffers: shared hit=138102915 read=67983056\r\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\r\n Buffers: shared hit=13\r\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\r\n Buffers: shared hit=13\r\n Planning time: 0.941 ms\r\n Execution time: 2228345.171 ms\r\n(48 rows)\r\n\r\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query.\r\n\r\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. 
flowscompact is a view built on table flows (having about 590 million rows), it only keeps the most often used fields.\r\nflows=# \\d+ flowscompact;\r\n View \"public.flowscompact\"\r\n Column | Type | Modifiers | Storage | Description\r\n-----------+--------------------------+-----------+---------+-------------\r\n flow_id | bigint | | plain |\r\n sysuptime | bigint | | plain |\r\n exaddr | ip4 | | plain |\r\n dpkts | integer | | plain |\r\n doctets | bigint | | plain |\r\n first | bigint | | plain |\r\n last | bigint | | plain |\r\n srcaddr | ip4 | | plain |\r\n dstaddr | ip4 | | plain |\r\n srcport | integer | | plain |\r\n dstport | integer | | plain |\r\n prot | smallint | | plain |\r\n tos | smallint | | plain |\r\n tcp_flags | smallint | | plain |\r\n timestamp | timestamp with time zone | | plain |\r\nView definition:\r\n SELECT flowstimestamp.flow_id,\r\n flowstimestamp.sysuptime,\r\n flowstimestamp.exaddr,\r\n flowstimestamp.dpkts,\r\n flowstimestamp.doctets,\r\n flowstimestamp.first,\r\n flowstimestamp.last,\r\n flowstimestamp.srcaddr,\r\n flowstimestamp.dstaddr,\r\n flowstimestamp.srcport,\r\n flowstimestamp.dstport,\r\n flowstimestamp.prot,\r\n flowstimestamp.tos,\r\n flowstimestamp.tcp_flags,\r\n flowstimestamp.\"timestamp\"\r\n FROM flowstimestamp;\r\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\r\nflows=# select * from mynetworks;\r\n ipaddr\r\n----------------\r\n 192.168.0.0/24<http://192.168.0.0/24>\r\n 10.112.12.0/30<http://10.112.12.0/30>\r\n 10.112.12.4/30<http://10.112.12.4/30>\r\n 10.112.12.8/30<http://10.112.12.8/30>\r\n(4 row)\r\nflows=# \\d+ mynetworks;\r\n Table \"public.mynetworks\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n--------+------+-----------+---------+--------------+-------------\r\n ipaddr | ip4r | | plain | |\r\nIndexes:\r\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\r\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 3.8 million rows.\r\nflows=# \\d+ dstexterne;\r\n View \"public.dstexterne\"\r\n Column | Type | Modifiers | Storage | Description\r\n---------+------+-----------+---------+-------------\r\n dstaddr | ip4 | | plain |\r\nView definition:\r\n SELECT DISTINCT flowscompact.dstaddr\r\n FROM flowscompact\r\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\r\n WHERE mynetworks.ipaddr IS NULL;\r\nThanks!\r\n\r\nCharles\r\n\r\nCharles,\r\n\r\nDon’t change temp_buffers.\r\nTry to increase work_mem even more, say work_mem=’12GB’, because it’s still using disk for sorting (starting around 20th minute as you noticed).\r\nSee if this:\r\n“Sort Method: external merge Disk: 3049216kB”\r\ngoes away.\r\nIgor\r\n\r\n\n\n\n\n\n\n\n\n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Friday, July 14, 2017 11:35 AM\nTo: Igor Neyman <[email protected]>\nCc: Jeff Janes <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n \n\n\n\nIgor,\n\n\n \n\n\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. 
Should I set temp_buffer to 21GB?\n\n\nHere is the explain you requested with work_mem set to 6GB:\n\n\n\nflows=# set work_mem='6GB';\n\n\nSET\n\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport,\n\n\n COUNT(*) AS conversation,\n\n\n SUM(doctets) / 1024 / 1024 AS mbytes\n\n\nFROM\n\n\n flowscompact,\n\n\n mynetworks\n\n\nWHERE\n\n\n mynetworks.ipaddr >>= flowscompact.srcaddr\n\n\n AND dstaddr IN\n\n\n (\n\n\n SELECT\n\n\n dstaddr\n\n\n FROM\n\n\n dstexterne\n\n\n )\n\n\nGROUP BY\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport\n\n\nORDER BY\n\n\n mbytes DESC LIMIT 50;\n\n\n QUERY PLAN \n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\n\n\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\n\n\n Sort Method: quicksort Memory: 654395kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\n\n\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\n\n\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Sort Method: external merge Disk: 3049216kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\n\n\n Workers Planned: 12\n\n\n Workers Launched: 12\n\n\n Buffers: shared hit=728798037 read=82974833\n\n\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13)\n\n\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\n\n\n Buffers: shared hit=728795193 read=82974833\n\n\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 loops=13)\n\n\n Buffers: shared hit=590692229 read=14991777\n\n\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\n\n\n Buffers: shared hit=860990 read=14991777\n\n\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190)\n\n\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\n\n\n Heap Fetches: 0\n\n\n Buffers: shared hit=589831203\n\n\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\n\n\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n\n\n 
Buffers: shared hit=138102964 read=67983056\n\n\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\n\n\n Group Key: flows_1.dstaddr\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\n\n\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n\n\n Rows Removed by Join Filter: 503353617\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\n\n\n Buffers: shared hit=138102915 read=67983056\n\n\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\n\n\n Buffers: shared hit=13\n\n\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\n\n\n Buffers: shared hit=13\n\n\n Planning time: 0.941 ms\n\n\n Execution time: 2228345.171 ms\n\n\n(48 rows)\n\n\n\n \n\n\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query. \n\n\n \n\n\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. flowscompact is a view built on table flows (having about 590 million rows), it only keeps\r\n the most often used fields.\n\n\n\nflows=# \\d+ flowscompact;\n\n\n View \"public.flowscompact\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n-----------+--------------------------+-----------+---------+-------------\n\n\n flow_id | bigint | | plain | \n\n\n sysuptime | bigint | | plain | \n\n\n exaddr | ip4 | | plain | \n\n\n dpkts | integer | | plain | \n\n\n doctets | bigint | | plain | \n\n\n first | bigint | | plain | \n\n\n last | bigint | | plain | \n\n\n srcaddr | ip4 | | plain | \n\n\n dstaddr | ip4 | | plain | \n\n\n srcport | integer | | plain | \n\n\n dstport | integer | | plain | \n\n\n prot | smallint | | plain | \n\n\n tos | smallint | | plain | \n\n\n tcp_flags | smallint | | plain | \n\n\n timestamp | timestamp with time zone | | plain | \n\n\nView definition:\n\n\n SELECT flowstimestamp.flow_id,\n\n\n flowstimestamp.sysuptime,\n\n\n flowstimestamp.exaddr,\n\n\n flowstimestamp.dpkts,\n\n\n flowstimestamp.doctets,\n\n\n flowstimestamp.first,\n\n\n flowstimestamp.last,\n\n\n flowstimestamp.srcaddr,\n\n\n flowstimestamp.dstaddr,\n\n\n flowstimestamp.srcport,\n\n\n flowstimestamp.dstport,\n\n\n flowstimestamp.prot,\n\n\n flowstimestamp.tos,\n\n\n flowstimestamp.tcp_flags,\n\n\n flowstimestamp.\"timestamp\"\n\n\n FROM flowstimestamp;\n\n\n\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\n\n\n\nflows=# select * from mynetworks;\n\n\n ipaddr \n\n\n----------------\n\n\n 192.168.0.0/24\n\n\n 10.112.12.0/30\n\n\n 10.112.12.4/30\n\n\n 10.112.12.8/30\n\n\n(4 row)\n\n\nflows=# \\d+ mynetworks;\n\n\n Table \"public.mynetworks\"\n\n\n Column | Type | Modifiers | Storage | Stats target | Description \n\n\n--------+------+-----------+---------+--------------+-------------\n\n\n ipaddr | ip4r | | plain | | \n\n\nIndexes:\n\n\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\n\n\n\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column 
and 3.8 million rows.\n\n\n\nflows=# \\d+ dstexterne;\n\n\n View \"public.dstexterne\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n---------+------+-----------+---------+-------------\n\n\n dstaddr | ip4 | | plain | \n\n\nView definition:\n\n\n SELECT DISTINCT flowscompact.dstaddr\n\n\n FROM flowscompact\n\n\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\n\n\n WHERE mynetworks.ipaddr IS NULL;\n\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nCharles,\n \nDon’t change temp_buffers.\nTry to increase work_mem even more, say work_mem=’12GB’, because it’s still using disk for sorting (starting around 20th minute as you noticed).\nSee if this:\n“Sort Method: external merge Disk: 3049216kB”\ngoes away.\n\nIgor",
"msg_date": "Fri, 14 Jul 2017 19:13:28 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
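Igor's 12GB figure is a per-session experiment. Because every sort or hash node can claim up to work_mem, and each of the 12 parallel workers in this plan gets its own allowance, it is safer to scope such a large value to the statement under test rather than the whole server. A minimal sketch (the query itself is elided):

BEGIN;
SET LOCAL work_mem = '12GB';  -- reverts automatically at COMMIT or ROLLBACK
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;                   -- the flowscompact/dstexterne query from above
ROLLBACK;

If the "Sort Method: external merge Disk: 3049216kB" line turns into an in-memory quicksort in the new plan, the spill Igor is targeting is gone.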
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Igor Neyman\r\nSent: Friday, July 14, 2017 3:13 PM\r\nTo: Charles Nadeau <[email protected]>\r\nCc: Jeff Janes <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nFrom: Charles Nadeau [mailto:[email protected]]\r\nSent: Friday, July 14, 2017 11:35 AM\r\nTo: Igor Neyman <[email protected]<mailto:[email protected]>>\r\nCc: Jeff Janes <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Very poor read performance, query independent\r\n\r\nIgor,\r\n\r\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. Should I set temp_buffer to 21GB?\r\nHere is the explain you requested with work_mem set to 6GB:\r\nflows=# set work_mem='6GB';\r\nSET\r\nflows=# explain (analyze, buffers) SELECT DISTINCT\r\n srcaddr,\r\n dstaddr,\r\n dstport,\r\n COUNT(*) AS conversation,\r\n SUM(doctets) / 1024 / 1024 AS mbytes\r\nFROM\r\n flowscompact,\r\n mynetworks\r\nWHERE\r\n mynetworks.ipaddr >>= flowscompact.srcaddr\r\n AND dstaddr IN\r\n (\r\n SELECT\r\n dstaddr\r\n FROM\r\n dstexterne\r\n )\r\nGROUP BY\r\n srcaddr,\r\n dstaddr,\r\n dstport\r\nORDER BY\r\n mbytes DESC LIMIT 50;\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\r\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\r\n Sort Method: quicksort Memory: 654395kB\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\r\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\r\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\r\n Sort Method: external merge Disk: 3049216kB\r\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\r\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\r\n Workers Planned: 12\r\n Workers Launched: 12\r\n Buffers: shared hit=728798037 read=82974833\r\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13)\r\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\r\n Buffers: shared hit=728795193 read=82974833\r\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 
rows=37910370 loops=13)\r\n Buffers: shared hit=590692229 read=14991777\r\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\r\n Buffers: shared hit=860990 read=14991777\r\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190)\r\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\r\n Heap Fetches: 0\r\n Buffers: shared hit=589831203\r\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\r\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\r\n Buffers: shared hit=138102964 read=67983056\r\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\r\n Group Key: flows_1.dstaddr\r\n Buffers: shared hit=138102964 read=67983056\r\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\r\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\r\n Rows Removed by Join Filter: 503353617\r\n Buffers: shared hit=138102964 read=67983056\r\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\r\n Buffers: shared hit=138102915 read=67983056\r\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\r\n Buffers: shared hit=13\r\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\r\n Buffers: shared hit=13\r\n Planning time: 0.941 ms\r\n Execution time: 2228345.171 ms\r\n(48 rows)\r\n\r\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query.\r\n\r\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. 
flowscompact is a view built on table flows (having about 590 million rows), it only keeps the most often used fields.\r\nflows=# \\d+ flowscompact;\r\n View \"public.flowscompact\"\r\n Column | Type | Modifiers | Storage | Description\r\n-----------+--------------------------+-----------+---------+-------------\r\n flow_id | bigint | | plain |\r\n sysuptime | bigint | | plain |\r\n exaddr | ip4 | | plain |\r\n dpkts | integer | | plain |\r\n doctets | bigint | | plain |\r\n first | bigint | | plain |\r\n last | bigint | | plain |\r\n srcaddr | ip4 | | plain |\r\n dstaddr | ip4 | | plain |\r\n srcport | integer | | plain |\r\n dstport | integer | | plain |\r\n prot | smallint | | plain |\r\n tos | smallint | | plain |\r\n tcp_flags | smallint | | plain |\r\n timestamp | timestamp with time zone | | plain |\r\nView definition:\r\n SELECT flowstimestamp.flow_id,\r\n flowstimestamp.sysuptime,\r\n flowstimestamp.exaddr,\r\n flowstimestamp.dpkts,\r\n flowstimestamp.doctets,\r\n flowstimestamp.first,\r\n flowstimestamp.last,\r\n flowstimestamp.srcaddr,\r\n flowstimestamp.dstaddr,\r\n flowstimestamp.srcport,\r\n flowstimestamp.dstport,\r\n flowstimestamp.prot,\r\n flowstimestamp.tos,\r\n flowstimestamp.tcp_flags,\r\n flowstimestamp.\"timestamp\"\r\n FROM flowstimestamp;\r\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\r\nflows=# select * from mynetworks;\r\n ipaddr\r\n----------------\r\n 192.168.0.0/24<http://192.168.0.0/24>\r\n 10.112.12.0/30<http://10.112.12.0/30>\r\n 10.112.12.4/30<http://10.112.12.4/30>\r\n 10.112.12.8/30<http://10.112.12.8/30>\r\n(4 row)\r\nflows=# \\d+ mynetworks;\r\n Table \"public.mynetworks\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n--------+------+-----------+---------+--------------+-------------\r\n ipaddr | ip4r | | plain | |\r\nIndexes:\r\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\r\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 3.8 million rows.\r\nflows=# \\d+ dstexterne;\r\n View \"public.dstexterne\"\r\n Column | Type | Modifiers | Storage | Description\r\n---------+------+-----------+---------+-------------\r\n dstaddr | ip4 | | plain |\r\nView definition:\r\n SELECT DISTINCT flowscompact.dstaddr\r\n FROM flowscompact\r\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\r\n WHERE mynetworks.ipaddr IS NULL;\r\nThanks!\r\n\r\nCharles\r\n\r\nCharles,\r\n\r\nAlso, let’s try to simplify your query and see if it performs better.\r\nYou are grouping by srcaddr, dstaddr, dstport, that makes DISTINCT not needed.\r\nAnd after simplifying WHERE clause (let me know if the result is not what you want), the query looks like:\r\n\r\nSELECT srcaddr, dstaddr, dstport,\r\n COUNT(*) AS conversation,\r\n SUM(doctets) / 1024 / 1024 AS mbytes\r\nFROM flowscompact\r\nWHERE srcaddr IN (SELECT ipaddr FROM mynetworks)\r\n AND dstaddr NOT IN (SELECT ipaddr FROM mynetworks)\r\nGROUP BY srcaddr, dstaddr, dstport\r\nORDER BY mbytes DESC\r\nLIMIT 50;\r\n\r\nNow, you didn’t provide the definition of flowstimestamp table.\r\nIf this table doesn’t have an index on (srcaddr, dstaddr, dstport) creating one should help (I think).\r\n\r\nIgor\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \n \n\n\nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Igor Neyman\nSent: Friday, July 14, 2017 3:13 PM\nTo: Charles Nadeau <[email protected]>\nCc: Jeff Janes <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read 
performance, query independent\n\n\n \n\nFrom: Charles Nadeau [mailto:[email protected]]\r\n\nSent: Friday, July 14, 2017 11:35 AM\nTo: Igor Neyman <[email protected]>\nCc: Jeff Janes <[email protected]>;\r\[email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n \n\n\n\nIgor,\n\n\n \n\n\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. Should I set temp_buffer to 21GB?\n\n\nHere is the explain you requested with work_mem set to 6GB:\n\n\n\nflows=# set work_mem='6GB';\n\n\nSET\n\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport,\n\n\n COUNT(*) AS conversation,\n\n\n SUM(doctets) / 1024 / 1024 AS mbytes\n\n\nFROM\n\n\n flowscompact,\n\n\n mynetworks\n\n\nWHERE\n\n\n mynetworks.ipaddr >>= flowscompact.srcaddr\n\n\n AND dstaddr IN\n\n\n (\n\n\n SELECT\n\n\n dstaddr\n\n\n FROM\n\n\n dstexterne\n\n\n )\n\n\nGROUP BY\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport\n\n\nORDER BY\n\n\n mbytes DESC LIMIT 50;\n\n\n QUERY PLAN \n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\n\n\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\n\n\n Sort Method: quicksort Memory: 654395kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\n\n\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\n\n\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Sort Method: external merge Disk: 3049216kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\n\n\n Workers Planned: 12\n\n\n Workers Launched: 12\n\n\n Buffers: shared hit=728798037 read=82974833\n\n\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13)\n\n\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\n\n\n Buffers: shared hit=728795193 read=82974833\n\n\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 loops=13)\n\n\n Buffers: shared hit=590692229 read=14991777\n\n\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\n\n\n Buffers: shared 
hit=860990 read=14991777\n\n\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190)\n\n\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\n\n\n Heap Fetches: 0\n\n\n Buffers: shared hit=589831203\n\n\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\n\n\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\n\n\n Group Key: flows_1.dstaddr\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\n\n\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n\n\n Rows Removed by Join Filter: 503353617\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\n\n\n Buffers: shared hit=138102915 read=67983056\n\n\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\n\n\n Buffers: shared hit=13\n\n\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\n\n\n Buffers: shared hit=13\n\n\n Planning time: 0.941 ms\n\n\n Execution time: 2228345.171 ms\n\n\n(48 rows)\n\n\n\n \n\n\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query. \n\n\n \n\n\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. 
flowscompact is a view built on table flows (having about 590 million rows), it only keeps\r\n the most often used fields.\n\n\n\nflows=# \\d+ flowscompact;\n\n\n View \"public.flowscompact\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n-----------+--------------------------+-----------+---------+-------------\n\n\n flow_id | bigint | | plain | \n\n\n sysuptime | bigint | | plain | \n\n\n exaddr | ip4 | | plain | \n\n\n dpkts | integer | | plain | \n\n\n doctets | bigint | | plain | \n\n\n first | bigint | | plain | \n\n\n last | bigint | | plain | \n\n\n srcaddr | ip4 | | plain | \n\n\n dstaddr | ip4 | | plain | \n\n\n srcport | integer | | plain | \n\n\n dstport | integer | | plain | \n\n\n prot | smallint | | plain | \n\n\n tos | smallint | | plain | \n\n\n tcp_flags | smallint | | plain | \n\n\n timestamp | timestamp with time zone | | plain | \n\n\nView definition:\n\n\n SELECT flowstimestamp.flow_id,\n\n\n flowstimestamp.sysuptime,\n\n\n flowstimestamp.exaddr,\n\n\n flowstimestamp.dpkts,\n\n\n flowstimestamp.doctets,\n\n\n flowstimestamp.first,\n\n\n flowstimestamp.last,\n\n\n flowstimestamp.srcaddr,\n\n\n flowstimestamp.dstaddr,\n\n\n flowstimestamp.srcport,\n\n\n flowstimestamp.dstport,\n\n\n flowstimestamp.prot,\n\n\n flowstimestamp.tos,\n\n\n flowstimestamp.tcp_flags,\n\n\n flowstimestamp.\"timestamp\"\n\n\n FROM flowstimestamp;\n\n\n\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\n\n\n\nflows=# select * from mynetworks;\n\n\n ipaddr \n\n\n----------------\n\n\n 192.168.0.0/24\n\n\n 10.112.12.0/30\n\n\n 10.112.12.4/30\n\n\n 10.112.12.8/30\n\n\n(4 row)\n\n\nflows=# \\d+ mynetworks;\n\n\n Table \"public.mynetworks\"\n\n\n Column | Type | Modifiers | Storage | Stats target | Description \n\n\n--------+------+-----------+---------+--------------+-------------\n\n\n ipaddr | ip4r | | plain | | \n\n\nIndexes:\n\n\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\n\n\n\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 3.8 million rows.\n\n\n\nflows=# \\d+ dstexterne;\n\n\n View \"public.dstexterne\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n---------+------+-----------+---------+-------------\n\n\n dstaddr | ip4 | | plain | \n\n\nView definition:\n\n\n SELECT DISTINCT flowscompact.dstaddr\n\n\n FROM flowscompact\n\n\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\n\n\n WHERE mynetworks.ipaddr IS NULL;\n\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nCharles,\n \nAlso, let’s try to simplify your query and see if it performs better.\nYou are grouping by srcaddr, dstaddr, dstport, that makes DISTINCT not needed.\nAnd after simplifying WHERE clause (let me know if the result is not what you want), the query looks like:\n \nSELECT srcaddr, dstaddr, dstport,\n COUNT(*) AS conversation,\n SUM(doctets) / 1024 / 1024 AS mbytes\nFROM flowscompact\nWHERE srcaddr IN (SELECT ipaddr FROM mynetworks)\n AND dstaddr NOT IN (SELECT ipaddr FROM mynetworks)\nGROUP BY srcaddr, dstaddr, dstport\nORDER BY mbytes DESC\r\n\nLIMIT 50;\n \nNow, you didn’t provide the definition of flowstimestamp table.\nIf this table doesn’t have an index on (srcaddr, dstaddr, dstport) creating one should help (I think).\n \nIgor",
"msg_date": "Fri, 14 Jul 2017 20:18:37 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
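Igor's index suggestion, written out as DDL. This is a sketch under stated assumptions: flows is taken to be the base table behind the flowstimestamp/flowscompact views (per Charles's description), and the index name is invented here:

-- Build without blocking concurrent writes; drop it if the planner never uses it
CREATE INDEX CONCURRENTLY flows_srcaddr_dstaddr_dstport_idx
    ON flows (srcaddr, dstaddr, dstport);

One earlier plan already uses an index named flows_srcaddr_dstaddr_idx, which by its name covers (srcaddr, dstaddr), so on a roughly 590-million-row table it is worth checking whether a single three-column index should replace it rather than sit alongside it.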
{
"msg_contents": "Ah yes - that seems more sensible (but still slower than I would expect \nfor 5 disks RAID 0). You should be able to get something like 5 * \n(single disk speed) i.e about 500MB/s.\n\nMight be worth increasing device read ahead (more than you have \nalready). Some of these so-called 'smart' RAID cards need to be hit over \nthe head before they will perform. E.g: I believe you have it set to 128 \n- I'd try 4096 or even 16384 (In the past I've used those settings on \nsome extremely stupid cards that refused to max out their disks known \nspeeds).\n\nAlso worth investigating is RAID stripe size - for DW work it makes \nsense for it to be reasonably big (256K to 1M), which again will help \nspeed is sequential scans.\n\nCheers\n\nMark\n\n\nOn 15/07/17 02:09, Charles Nadeau wrote:\n> Mark,\n>\n> First I must say that I changed my disks configuration from 4 disks in \n> RAID 10 to 5 disks in RAID 0 because I almost ran out of disk space \n> during the last ingest of data.\n> Here is the result test you asked. It was done with a cold cache:\n>\n> flows=# \\timing\n> Timing is on.\n> flows=# explain select count(*) from flows;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=17214914.09..17214914.09 rows=1 width=8)\n> -> Gather (cost=17214914.07..17214914.09 rows=1 width=8)\n> Workers Planned: 1\n> -> Partial Aggregate (cost=17213914.07..17213914.07\n> rows=1 width=8)\n> -> Parallel Seq Scan on flows\n> (cost=0.00..17019464.49 rows=388899162 width=0)\n> (5 rows)\n>\n> Time: 171.835 ms\n> flows=# select pg_relation_size('flows');\n> pg_relation_size\n> ------------------\n> 129865867264\n> (1 row)\n>\n> Time: 57.157 ms\n> flows=# select count(*) from flows;\n> LOG: duration: 625546.522 ms statement: select count(*) from flows;\n> count\n> -----------\n> 589831190\n> (1 row)\n>\n> Time: 625546.662 ms\n>\n> The throughput reported by Postgresql is almost 198MB/s, and the \n> throughput as mesured by dstat during the query execution was between \n> 25 and 299MB/s. It is much better than what I had before! The i/o wait \n> was about 12% all through the query. 
One thing I noticed is the \n> discrepency between the read throughput reported by pg_activity and \n> the one reported by dstat: pg_activity always report a value lower \n> than dstat.\n>\n> Besides the change of disks configuration, here is what contributed \n> the most to the improvment of the performance so far:\n>\n> Using Hugepage\n> Increasing effective_io_concurrency to 256\n> Reducing random_page_cost from 22 to 4\n> Reducing min_parallel_relation_size to 512kB to have more workers\n> when doing sequential parallel scan of my biggest table\n>\n>\n> Thanks for recomending this test, I now know what the real throughput \n> should be!\n>\n> Charles\n>\n> On Wed, Jul 12, 2017 at 4:11 AM, Mark Kirkwood \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Hmm - how are you measuring that sequential scan speed of 4MB/s?\n> I'd recommend doing a very simple test e.g, here's one on my\n> workstation - 13 GB single table on 1 SATA drive - cold cache\n> after reboot, sequential scan using Postgres 9.6.2:\n>\n> bench=# EXPLAIN SELECT count(*) FROM pgbench_accounts;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------\n> Aggregate (cost=2889345.00..2889345.01 rows=1 width=8)\n> -> Seq Scan on pgbench_accounts (cost=0.00..2639345.00\n> rows=100000000 width=0)\n> (2 rows)\n>\n>\n> bench=# SELECT pg_relation_size('pgbench_accounts');\n> pg_relation_size\n> ------------------\n> 13429514240\n> (1 row)\n>\n> bench=# SELECT count(*) FROM pgbench_accounts;\n> count\n> -----------\n> 100000000\n> (1 row)\n>\n> Time: 118884.277 ms\n>\n>\n> So doing the math seq read speed is about 110MB/s (i.e 13 GB in\n> 120 sec). Sure enough, while I was running the query iostat showed:\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n> avgrq-sz avgqu-sz await r_await w_await svctm %util\n> sda 0.00 0.00 926.00 0.00 114.89 0.00 \n> 254.10 1.90 2.03 2.03 0.00 1.08 100.00\n>\n>\n> So might be useful for us to see something like that from your\n> system - note you need to check you really have flushed the cache,\n> and that no other apps are using the db.\n>\n> regards\n>\n> Mark\n>\n>\n> On 12/07/17 00:46, Charles Nadeau wrote:\n>\n> After reducing random_page_cost to 4 and testing more, I can\n> report that the aggregate read throughput for parallel\n> sequential scan is about 90MB/s. However the throughput for\n> sequential scan is still around 4MB/s.\n>\n>\n>\n>\n>\n> -- \n> Charles Nadeau Ph.D.\n> http://charlesnadeau.blogspot.com/\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Jul 2017 11:09:18 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
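Charles credits most of the improvement to a handful of settings (huge pages, effective_io_concurrency, random_page_cost, min_parallel_relation_size). A quick way to confirm what the running server actually has, using the 9.6 parameter names from this thread (min_parallel_relation_size became min_parallel_table_scan_size in PostgreSQL 10):

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('huge_pages',
               'effective_io_concurrency',
               'random_page_cost',
               'min_parallel_relation_size');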
{
"msg_contents": "Thinking about this a bit more - if somewhat more blazing performance is \nneeded, then this could be achieved via losing the RAID card and \nspinning disks altogether and buying 1 of the NVME or SATA solid state \nproducts: e.g\n\n- Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds and \n200K IOPS)\n\n- Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n\n\nThe Samsung needs an M.2 port on the mobo (but most should have 'em - \nand if not PCIe X4 adapter cards are quite cheap). The Intel is a bit \nmore expensive compared to the Samsung, and is slower but has a longer \nlifetime. However for your workload the Sammy is probably fine.\n\nregards\n\nMark\n\nOn 15/07/17 11:09, Mark Kirkwood wrote:\n> Ah yes - that seems more sensible (but still slower than I would \n> expect for 5 disks RAID 0).\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Jul 2017 11:57:11 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Mark,\n\nThe server is a HP DL 380 G6. It doesn't really work with SATA drives. And\nwhen you find one that is compatible, it is only used at 3Gb/s with a\nmaximum of 50000 IOPS (a well know caracteristic of the HP P410i SAS RAID\ncontroller). I am looking at getting a Kingston Digital HyperX Predator\nthat I could use in one of the PCIe 2.0 x4 slot. However I am worried about\nthe \"thermal runaway\", i.e. when the server can't get a temperature reading\nfrom a PCIe card, it spins the fans at full speed to protect the server\nagainst high temperature. The machine being next to my desk I worry about\nthe deafening noise it will create.\nThanks!\n\nChales\n\nOn Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> Thinking about this a bit more - if somewhat more blazing performance is\n> needed, then this could be achieved via losing the RAID card and spinning\n> disks altogether and buying 1 of the NVME or SATA solid state products: e.g\n>\n> - Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds and 200K\n> IOPS)\n>\n> - Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n>\n>\n> The Samsung needs an M.2 port on the mobo (but most should have 'em - and\n> if not PCIe X4 adapter cards are quite cheap). The Intel is a bit more\n> expensive compared to the Samsung, and is slower but has a longer lifetime.\n> However for your workload the Sammy is probably fine.\n>\n> regards\n>\n> Mark\n>\n> On 15/07/17 11:09, Mark Kirkwood wrote:\n>\n>> Ah yes - that seems more sensible (but still slower than I would expect\n>> for 5 disks RAID 0).\n>>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nMark,The server is a HP DL 380 G6. It doesn't really work with SATA drives. And when you find one that is compatible, it is only used at 3Gb/s with a maximum of 50000 IOPS (a well know caracteristic of the HP P410i SAS RAID controller). I am looking at getting a Kingston Digital HyperX Predator that I could use in one of the PCIe 2.0 x4 slot. However I am worried about the \"thermal runaway\", i.e. when the server can't get a temperature reading from a PCIe card, it spins the fans at full speed to protect the server against high temperature. The machine being next to my desk I worry about the deafening noise it will create.Thanks!ChalesOn Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood <[email protected]> wrote:Thinking about this a bit more - if somewhat more blazing performance is needed, then this could be achieved via losing the RAID card and spinning disks altogether and buying 1 of the NVME or SATA solid state products: e.g\n\n- Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds and 200K IOPS)\n\n- Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n\n\nThe Samsung needs an M.2 port on the mobo (but most should have 'em - and if not PCIe X4 adapter cards are quite cheap). The Intel is a bit more expensive compared to the Samsung, and is slower but has a longer lifetime. 
However for your workload the Sammy is probably fine.\n\nregards\n\nMark\n\nOn 15/07/17 11:09, Mark Kirkwood wrote:\n\nAh yes - that seems more sensible (but still slower than I would expect for 5 disks RAID 0).\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Sat, 15 Jul 2017 18:12:25 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Mark,\n\nI increased the read ahead to 16384 and it doesn't improve performance. My\nRAID 0 use a stripe size of 256k, the maximum size supported by the\ncontroller.\nThanks!\n\nCharles\n\nOn Sat, Jul 15, 2017 at 1:02 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> Ah yes - that seems more sensible (but still slower than I would expect\n> for 5 disks RAID 0). You should be able to get something like 5 * (single\n> disk speed) i.e about 500MB/s.\n>\n> Might be worth increasing device read ahead (more than you have already).\n> Some of these so-called 'smart' RAID cards need to be hit over the head\n> before they will perform. E.g: I believe you have it set to 128 - I'd try\n> 4096 or even 16384 (In the past I've used those settings on some extremely\n> stupid cards that refused to max out their disks known speeds).\n>\n> Also worth investigating is RAID stripe size - for DW work it makes sense\n> for it to be reasonably big (256K to 1M), which again will help speed is\n> sequential scans.\n>\n> Cheers\n>\n> Mark\n>\n> P.s I used to work for Greenplum, so this type of problem came up a lot\n> :-) . The best cards were the LSI and Areca!\n>\n>\n>\n> On 15/07/17 02:09, Charles Nadeau wrote:\n>\n>> Mark,\n>>\n>> First I must say that I changed my disks configuration from 4 disks in\n>> RAID 10 to 5 disks in RAID 0 because I almost ran out of disk space during\n>> the last ingest of data.\n>> Here is the result test you asked. It was done with a cold cache:\n>>\n>> flows=# \\timing\n>> Timing is on.\n>> flows=# explain select count(*) from flows;\n>> QUERY PLAN\n>> ------------------------------------------------------------\n>> -----------------------------------\n>> Finalize Aggregate (cost=17214914.09..17214914.09 rows=1 width=8)\n>> -> Gather (cost=17214914.07..17214914.09 rows=1 width=8)\n>> Workers Planned: 1\n>> -> Partial Aggregate (cost=17213914.07..17213914.07\n>> rows=1 width=8)\n>> -> Parallel Seq Scan on flows\n>> (cost=0.00..17019464.49 rows=388899162 width=0)\n>> (5 rows)\n>>\n>> Time: 171.835 ms\n>> flows=# select pg_relation_size('flows');\n>> pg_relation_size\n>> ------------------\n>> 129865867264\n>> (1 row)\n>>\n>> Time: 57.157 ms\n>> flows=# select count(*) from flows;\n>> LOG: duration: 625546.522 ms statement: select count(*) from flows;\n>> count\n>> -----------\n>> 589831190\n>> (1 row)\n>>\n>> Time: 625546.662 ms\n>>\n>> The throughput reported by Postgresql is almost 198MB/s, and the\n>> throughput as mesured by dstat during the query execution was between 25\n>> and 299MB/s. It is much better than what I had before! The i/o wait was\n>> about 12% all through the query. 
One thing I noticed is the discrepency\n>> between the read throughput reported by pg_activity and the one reported by\n>> dstat: pg_activity always report a value lower than dstat.\n>>\n>> Besides the change of disks configuration, here is what contributed the\n>> most to the improvment of the performance so far:\n>>\n>> Using Hugepage\n>> Increasing effective_io_concurrency to 256\n>> Reducing random_page_cost from 22 to 4\n>> Reducing min_parallel_relation_size to 512kB to have more workers\n>> when doing sequential parallel scan of my biggest table\n>>\n>>\n>> Thanks for recomending this test, I now know what the real throughput\n>> should be!\n>>\n>> Charles\n>>\n>> On Wed, Jul 12, 2017 at 4:11 AM, Mark Kirkwood <\n>> [email protected] <mailto:[email protected]>>\n>> wrote:\n>>\n>> Hmm - how are you measuring that sequential scan speed of 4MB/s?\n>> I'd recommend doing a very simple test e.g, here's one on my\n>> workstation - 13 GB single table on 1 SATA drive - cold cache\n>> after reboot, sequential scan using Postgres 9.6.2:\n>>\n>> bench=# EXPLAIN SELECT count(*) FROM pgbench_accounts;\n>> QUERY PLAN\n>> ------------------------------------------------------------\n>> ------------------------\n>> Aggregate (cost=2889345.00..2889345.01 rows=1 width=8)\n>> -> Seq Scan on pgbench_accounts (cost=0.00..2639345.00\n>> rows=100000000 width=0)\n>> (2 rows)\n>>\n>>\n>> bench=# SELECT pg_relation_size('pgbench_accounts');\n>> pg_relation_size\n>> ------------------\n>> 13429514240\n>> (1 row)\n>>\n>> bench=# SELECT count(*) FROM pgbench_accounts;\n>> count\n>> -----------\n>> 100000000\n>> (1 row)\n>>\n>> Time: 118884.277 ms\n>>\n>>\n>> So doing the math seq read speed is about 110MB/s (i.e 13 GB in\n>> 120 sec). Sure enough, while I was running the query iostat showed:\n>>\n>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n>> avgrq-sz avgqu-sz await r_await w_await svctm %util\n>> sda 0.00 0.00 926.00 0.00 114.89 0.00\n>> 254.10 1.90 2.03 2.03 0.00 1.08 100.00\n>>\n>>\n>> So might be useful for us to see something like that from your\n>> system - note you need to check you really have flushed the cache,\n>> and that no other apps are using the db.\n>>\n>> regards\n>>\n>> Mark\n>>\n>>\n>> On 12/07/17 00:46, Charles Nadeau wrote:\n>>\n>> After reducing random_page_cost to 4 and testing more, I can\n>> report that the aggregate read throughput for parallel\n>> sequential scan is about 90MB/s. However the throughput for\n>> sequential scan is still around 4MB/s.\n>>\n>>\n>>\n>>\n>>\n>> --\n>> Charles Nadeau Ph.D.\n>> http://charlesnadeau.blogspot.com/\n>>\n>\n>\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nMark,I increased the read ahead to 16384 and it doesn't improve performance. My RAID 0 use a stripe size of 256k, the maximum size supported by the controller.Thanks!CharlesOn Sat, Jul 15, 2017 at 1:02 AM, Mark Kirkwood <[email protected]> wrote:Ah yes - that seems more sensible (but still slower than I would expect for 5 disks RAID 0). You should be able to get something like 5 * (single disk speed) i.e about 500MB/s.\n\nMight be worth increasing device read ahead (more than you have already). Some of these so-called 'smart' RAID cards need to be hit over the head before they will perform. 
E.g: I believe you have it set to 128 - I'd try 4096 or even 16384 (In the past I've used those settings on some extremely stupid cards that refused to max out their disks known speeds).\n\nAlso worth investigating is RAID stripe size - for DW work it makes sense for it to be reasonably big (256K to 1M), which again will help speed is sequential scans.\n\nCheers\n\nMark\n\nP.s I used to work for Greenplum, so this type of problem came up a lot :-) . The best cards were the LSI and Areca!\n\n\nOn 15/07/17 02:09, Charles Nadeau wrote:\n\nMark,\n\nFirst I must say that I changed my disks configuration from 4 disks in RAID 10 to 5 disks in RAID 0 because I almost ran out of disk space during the last ingest of data.\nHere is the result test you asked. It was done with a cold cache:\n\n flows=# \\timing\n Timing is on.\n flows=# explain select count(*) from flows;\n QUERY PLAN\n -----------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=17214914.09..17214914.09 rows=1 width=8)\n -> Gather (cost=17214914.07..17214914.09 rows=1 width=8)\n Workers Planned: 1\n -> Partial Aggregate (cost=17213914.07..17213914.07\n rows=1 width=8)\n -> Parallel Seq Scan on flows\n (cost=0.00..17019464.49 rows=388899162 width=0)\n (5 rows)\n\n Time: 171.835 ms\n flows=# select pg_relation_size('flows');\n pg_relation_size\n ------------------\n 129865867264\n (1 row)\n\n Time: 57.157 ms\n flows=# select count(*) from flows;\n LOG: duration: 625546.522 ms statement: select count(*) from flows;\n count\n -----------\n 589831190\n (1 row)\n\n Time: 625546.662 ms\n\nThe throughput reported by Postgresql is almost 198MB/s, and the throughput as mesured by dstat during the query execution was between 25 and 299MB/s. It is much better than what I had before! The i/o wait was about 12% all through the query. One thing I noticed is the discrepency between the read throughput reported by pg_activity and the one reported by dstat: pg_activity always report a value lower than dstat.\n\nBesides the change of disks configuration, here is what contributed the most to the improvment of the performance so far:\n\n Using Hugepage\n Increasing effective_io_concurrency to 256\n Reducing random_page_cost from 22 to 4\n Reducing min_parallel_relation_size to 512kB to have more workers\n when doing sequential parallel scan of my biggest table\n\n\nThanks for recomending this test, I now know what the real throughput should be!\n\nCharles\n\nOn Wed, Jul 12, 2017 at 4:11 AM, Mark Kirkwood <[email protected] <mailto:[email protected]>> wrote:\n\n Hmm - how are you measuring that sequential scan speed of 4MB/s?\n I'd recommend doing a very simple test e.g, here's one on my\n workstation - 13 GB single table on 1 SATA drive - cold cache\n after reboot, sequential scan using Postgres 9.6.2:\n\n bench=# EXPLAIN SELECT count(*) FROM pgbench_accounts;\n QUERY PLAN\n ------------------------------------------------------------------------------------\n Aggregate (cost=2889345.00..2889345.01 rows=1 width=8)\n -> Seq Scan on pgbench_accounts (cost=0.00..2639345.00\n rows=100000000 width=0)\n (2 rows)\n\n\n bench=# SELECT pg_relation_size('pgbench_accounts');\n pg_relation_size\n ------------------\n 13429514240\n (1 row)\n\n bench=# SELECT count(*) FROM pgbench_accounts;\n count\n -----------\n 100000000\n (1 row)\n\n Time: 118884.277 ms\n\n\n So doing the math seq read speed is about 110MB/s (i.e 13 GB in\n 120 sec). 
Sure enough, while I was running the query iostat showed:\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.00 926.00 0.00 114.89 0.00 254.10 1.90 2.03 2.03 0.00 1.08 100.00\n\n\n So might be useful for us to see something like that from your\n system - note you need to check you really have flushed the cache,\n and that no other apps are using the db.\n\n regards\n\n Mark\n\n\n On 12/07/17 00:46, Charles Nadeau wrote:\n\n After reducing random_page_cost to 4 and testing more, I can\n report that the aggregate read throughput for parallel\n sequential scan is about 90MB/s. However the throughput for\n sequential scan is still around 4MB/s.\n\n\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Sat, 15 Jul 2017 19:53:56 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
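A minimal shell sketch of the read-ahead and raw-throughput checks discussed in the message above; the device name /dev/sdc, the database name flows, and the dd sample size are illustrative assumptions, not values taken from the thread.

# Set block-device read-ahead (units are 512-byte sectors, so 16384 = 8 MiB)
# and confirm the new value took effect.
sudo blockdev --setra 16384 /dev/sdc
sudo blockdev --getra /dev/sdc

# Raw sequential read speed of the array, bypassing PostgreSQL entirely,
# as an upper bound to compare the count(*) timing against.
sudo dd if=/dev/sdc of=/dev/null bs=1M count=10000 iflag=direct

# Cold-cache sequential scan through PostgreSQL: drop the OS page cache,
# then time a full-table count.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
time psql -d flows -c "SELECT count(*) FROM flows;"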
{
"msg_contents": "On Sat, Jul 15, 2017 at 11:53 AM, Charles Nadeau\n<[email protected]> wrote:\n> Mark,\n>\n> I increased the read ahead to 16384 and it doesn't improve performance. My\n> RAID 0 use a stripe size of 256k, the maximum size supported by the\n> controller.\n\nAre your queries still spilling to disk for sorts? If this is the\ncase, and they're just too big to fit in memory, then you need to move\nyour temp space, where sorts happen, onto another disk array that\nisn't your poor overworked raid-10 array. Contention between sorts and\nreads can kill performance quick, esp on spinning rust.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Jul 2017 11:58:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
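One possible shape of the separate-temp-space arrangement Scott describes, under assumed names (the mount point /mnt/temp_disk and the tablespace name are placeholders); the last query only reports how much temporary-file traffic each database has generated.

# Prepare a directory on the dedicated disk, owned by the postgres OS user.
sudo mkdir -p /mnt/temp_disk/pgtemp
sudo chown postgres:postgres /mnt/temp_disk/pgtemp

# Create a tablespace there and route temporary files (sorts, hashes) to it.
psql -d flows -c "CREATE TABLESPACE temp_disk LOCATION '/mnt/temp_disk/pgtemp';"
psql -d flows -c "ALTER SYSTEM SET temp_tablespaces = 'temp_disk';"
psql -d flows -c "SELECT pg_reload_conf();"

# Check cumulative temporary-file usage per database.
psql -d flows -c "SELECT datname, temp_files, pg_size_pretty(temp_bytes) FROM pg_stat_database;"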
{
"msg_contents": "Right, that is a bit of a show stopper for those SSD (the Intel needs \nSATA 6Gb/s and the Sammy's need PCIe 3.0 to perform to their rated specs).\n\nregards\n\nMark\n\n\nOn 16/07/17 04:12, Charles Nadeau wrote:\n> Mark,\n>\n> The server is a . It doesn't really work with SATA drives. And when \n> you find one that is compatible, it is only used at 3Gb/s with a \n> maximum of 50000 IOPS (a well know caracteristic of the HP P410i SAS \n> RAID controller). I am looking at getting a Kingston Digital HyperX \n> Predator that I could use in one of the PCIe 2.0 x4 slot. However I am \n> worried about the \"thermal runaway\", i.e. when the server can't get a \n> temperature reading from a PCIe card, it spins the fans at full speed \n> to protect the server against high temperature. The machine being next \n> to my desk I worry about the deafening noise it will create.\n> Thanks!\n>\n> Chales\n>\n> On Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Thinking about this a bit more - if somewhat more blazing\n> performance is needed, then this could be achieved via losing the\n> RAID card and spinning disks altogether and buying 1 of the NVME\n> or SATA solid state products: e.g\n>\n> - Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds\n> and 200K IOPS)\n>\n> - Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n>\n>\n> The Samsung needs an M.2 port on the mobo (but most should have\n> 'em - and if not PCIe X4 adapter cards are quite cheap). The Intel\n> is a bit more expensive compared to the Samsung, and is slower but\n> has a longer lifetime. However for your workload the Sammy is\n> probably fine.\n>\n> regards\n>\n> Mark\n>\n> On 15/07/17 11:09, Mark Kirkwood wrote:\n>\n> Ah yes - that seems more sensible (but still slower than I\n> would expect for 5 disks RAID 0).\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> <http://www.postgresql.org/mailpref/pgsql-performance>\n>\n>\n>\n>\n> -- \n> Charles Nadeau Ph.D.\n> http://charlesnadeau.blogspot.com/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Jul 2017 11:48:08 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Igor,\n\nI set the work_mem to 12GB, restarted postrgresql, repeat the same \"explain\n(analyze, buffers)...\" as above and the read throughput was very low, at\nmost 10MB/s. All the sorting operation are now done in memory.\nI lowered the work_mem back to 6GB, restarted postrgresql, repeat the same\n\"explain (analyze, buffers)...\" as above and the read throughput was very\nlow, at most 10MB/s. The 1st sorting operation is still done in memory, the\nsecond one to disk. I think I need about 4GB to do all sort in memory.\nOne thing I remember from Friday's \"explain (analyze, buffers)...\". I set\ntemp_buffer and work_mem to 6GB as I read both your message one after the\nother. So I decided to try again: I then set work_mem=6GB,\ntemp_buffers=6GB, restarted postrgresql, repeat the same \"explain (analyze,\nbuffers)...\" as above and the read throughput was very low again, at most\n10MB/s. The 1st sorting operation is still done in memory, the second one\nto disk.\nFor the last test, I brought back work_mem and temp_buffer to their\noriginal value. The read throughput is still low.\nIn all cases, about 20 minutes after the query starts, it start writing to\ndisk furiously. Here are the peak values as reported by dstat:\nwork_mem=12GB, temp_buffers=8MB: peak of 393MB/s\nwork_mem=6GB, temp_buffers=8MB: peak of 579MB/s\nwork_mem=6GB, temp_buffers=6GB: peak of 418MB/s\nwork_mem=1468006kB and temp_buffers=8MB, peak of 61MB/s\nAlso, at peak write, the server alomost ran of memory: the cache almost\ngoes to 0 and it starts swapping.\n\nThis query is a bit extreme in terms of sorting. Maybe I should try to\nbenchmark while counting all the records of my biggest table like Mark\nKirkwood suggested. I'll do some more tests and post the results back to\nthe mailing list.\nThanks!\n\nCharles\n\nOn Fri, Jul 14, 2017 at 9:13 PM, Igor Neyman <[email protected]> wrote:\n\n> *From:* Charles Nadeau [mailto:[email protected]]\n> *Sent:* Friday, July 14, 2017 11:35 AM\n> *To:* Igor Neyman <[email protected]>\n> *Cc:* Jeff Janes <[email protected]>; [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Igor,\n>\n>\n>\n> Initially temp_buffer was left to its default value (8MB). Watching the\n> content of the directory that stores the temporary files, I found that I\n> need at most 21GB of temporary files space. 
Should I set temp_buffer to\n> 21GB?\n>\n> Here is the explain you requested with work_mem set to 6GB:\n>\n> flows=# set work_mem='6GB';\n>\n> SET\n>\n> flows=# explain (analyze, buffers) SELECT DISTINCT\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport,\n>\n> COUNT(*) AS conversation,\n>\n> SUM(doctets) / 1024 / 1024 AS mbytes\n>\n> FROM\n>\n> flowscompact,\n>\n> mynetworks\n>\n> WHERE\n>\n> mynetworks.ipaddr >>= flowscompact.srcaddr\n>\n> AND dstaddr IN\n>\n> (\n>\n> SELECT\n>\n> dstaddr\n>\n> FROM\n>\n> dstexterne\n>\n> )\n>\n> GROUP BY\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport\n>\n> ORDER BY\n>\n> mbytes DESC LIMIT 50;\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------------------\n>\n> Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual\n> time=2227678.196..2227678.223 rows=50 loops=1)\n>\n> Buffers: shared hit=728798038 read=82974833, temp read=381154\n> written=381154\n>\n> -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52)\n> (actual time=2227678.194..2227678.217 rows=50 loops=1)\n>\n> Buffers: shared hit=728798038 read=82974833, temp read=381154\n> written=381154\n>\n> -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52)\n> (actual time=2227678.192..2227678.202 rows=50 loops=1)\n>\n> Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n> '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n> (count(*))\n>\n> Sort Method: quicksort Memory: 654395kB\n>\n> Buffers: shared hit=728798038 read=82974833, temp\n> read=381154 written=381154\n>\n> -> GroupAggregate (cost=48059426.65..48079260.50\n> rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671\n> loops=1)\n>\n> Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n>\n> Buffers: shared hit=728798038 read=82974833, temp\n> read=381154 written=381154\n>\n> -> Sort (cost=48059426.65..48060748.90\n> rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640\n> loops=1)\n>\n> Sort Key: flows.srcaddr, flows.dstaddr,\n> flows.dstport\n>\n> Sort Method: external merge Disk: 3049216kB\n>\n> Buffers: shared hit=728798038 read=82974833,\n> temp read=381154 written=381154\n>\n> -> Gather (cost=30060688.07..48003007.07\n> rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640\n> loops=1)\n>\n> Workers Planned: 12\n>\n> Workers Launched: 12\n>\n> Buffers: shared hit=728798037\n> read=82974833\n>\n> -> Hash Semi Join\n> (cost=30059688.07..47951761.31 rows=220376 width=20) (actual\n> time=1268845.181..2007864.725 rows=7057357 loops=13)\n>\n> Hash Cond: (flows.dstaddr =\n> flows_1.dstaddr)\n>\n> Buffers: shared hit=728795193\n> read=82974833\n>\n> -> Nested Loop\n> (cost=0.03..17891246.86 rows=220376 width=20) (actual\n> time=0.207..723790.283 rows=37910370 loops=13)\n>\n> Buffers: shared hit=590692229\n> read=14991777\n>\n> -> Parallel Seq Scan on\n> flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual\n> time=0.152..566179.117 rows=45371630 loops=13)\n>\n> Buffers: shared\n> hit=860990 read=14991777\n>\n> -> Index Only Scan using\n> mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n> (actual time=0.002..0.002 rows=1 loops=589831190)\n>\n> Index Cond: (ipaddr >>=\n> (flows.srcaddr)::ip4r)\n>\n> Heap Fetches: 0\n>\n> Buffers: shared\n> hit=589831203\n>\n> -> Hash\n> (cost=30059641.47..30059641.47 rows=13305 width=4) (actual\n> 
time=1268811.101..1268811.101 rows=3803508 loops=13)\n>\n> Buckets: 4194304 (originally\n> 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n>\n> Buffers: shared hit=138102964\n> read=67983056\n>\n> -> HashAggregate\n> (cost=30059561.64..30059601.56 rows=13305 width=4) (actual\n> time=1265248.165..1267432.083 rows=3803508 loops=13)\n>\n> Group Key:\n> flows_1.dstaddr\n>\n> Buffers: shared\n> hit=138102964 read=67983056\n>\n> -> Nested Loop Anti\n> Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual\n> time=0.389..1201072.707 rows=125838232 loops=13)\n>\n> Join Filter:\n> (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n>\n> Rows Removed by\n> Join Filter: 503353617\n>\n> Buffers: shared\n> hit=138102964 read=67983056\n>\n> -> Seq Scan on\n> flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual\n> time=0.322..343152.274 rows=589831190 loops=13)\n>\n> Buffers:\n> shared hit=138102915 read=67983056\n>\n> -> Materialize\n> (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2\n> loops=7667805470)\n>\n> Buffers:\n> shared hit=13\n>\n> -> Seq\n> Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual\n> time=0.006..0.007 rows=4 loops=13)\n>\n>\n> Buffers: shared hit=13\n>\n> Planning time: 0.941 ms\n>\n> Execution time: 2228345.171 ms\n>\n> (48 rows)\n>\n>\n>\n> With a work_mem at 6GB, I noticed that for the first 20 minutes the query\n> was running, the i/o wait was much lower, hovering aroun 3% then it jumped\n> 45% until almost the end of the query.\n>\n>\n>\n> flowscompact and dstexterne are actually views. I use views to simplify\n> query writing and to \"abstract\" queries that are use often in other\n> queries. flowscompact is a view built on table flows (having about 590\n> million rows), it only keeps the most often used fields.\n>\n> flows=# \\d+ flowscompact;\n>\n> View \"public.flowscompact\"\n>\n> Column | Type | Modifiers | Storage | Description\n>\n> -----------+--------------------------+-----------+---------+-------------\n>\n> flow_id | bigint | | plain |\n>\n> sysuptime | bigint | | plain |\n>\n> exaddr | ip4 | | plain |\n>\n> dpkts | integer | | plain |\n>\n> doctets | bigint | | plain |\n>\n> first | bigint | | plain |\n>\n> last | bigint | | plain |\n>\n> srcaddr | ip4 | | plain |\n>\n> dstaddr | ip4 | | plain |\n>\n> srcport | integer | | plain |\n>\n> dstport | integer | | plain |\n>\n> prot | smallint | | plain |\n>\n> tos | smallint | | plain |\n>\n> tcp_flags | smallint | | plain |\n>\n> timestamp | timestamp with time zone | | plain |\n>\n> View definition:\n>\n> SELECT flowstimestamp.flow_id,\n>\n> flowstimestamp.sysuptime,\n>\n> flowstimestamp.exaddr,\n>\n> flowstimestamp.dpkts,\n>\n> flowstimestamp.doctets,\n>\n> flowstimestamp.first,\n>\n> flowstimestamp.last,\n>\n> flowstimestamp.srcaddr,\n>\n> flowstimestamp.dstaddr,\n>\n> flowstimestamp.srcport,\n>\n> flowstimestamp.dstport,\n>\n> flowstimestamp.prot,\n>\n> flowstimestamp.tos,\n>\n> flowstimestamp.tcp_flags,\n>\n> flowstimestamp.\"timestamp\"\n>\n> FROM flowstimestamp;\n>\n> mynetworks is a table having one column and 4 rows; it contains a list of\n> our network networks:\n>\n> flows=# select * from mynetworks;\n>\n> ipaddr\n>\n> ----------------\n>\n> 192.168.0.0/24\n>\n> 10.112.12.0/30\n>\n> 10.112.12.4/30\n>\n> 10.112.12.8/30\n>\n> (4 row)\n>\n> flows=# \\d+ mynetworks;\n>\n> Table \"public.mynetworks\"\n>\n> Column | Type | Modifiers | Storage | Stats target | Description\n>\n> 
--------+------+-----------+---------+--------------+-------------\n>\n> ipaddr | ip4r | | plain | |\n>\n> Indexes:\n>\n> \"mynetworks_ipaddr_idx\" gist (ipaddr)\n>\n> dstexterne is a view listing all the destination IPv4 addresses not inside\n> our network; it has one column and 3.8 million rows.\n>\n> flows=# \\d+ dstexterne;\n>\n> View \"public.dstexterne\"\n>\n> Column | Type | Modifiers | Storage | Description\n>\n> ---------+------+-----------+---------+-------------\n>\n> dstaddr | ip4 | | plain |\n>\n> View definition:\n>\n> SELECT DISTINCT flowscompact.dstaddr\n>\n> FROM flowscompact\n>\n> LEFT JOIN mynetworks ON mynetworks.ipaddr >>\n> flowscompact.dstaddr::ip4r\n>\n> WHERE mynetworks.ipaddr IS NULL;\n>\n> Thanks!\n>\n>\n>\n> Charles\n>\n>\n>\n> Charles,\n>\n>\n>\n> Don’t change temp_buffers.\n>\n> Try to increase work_mem even more, say work_mem=’12GB’, because it’s\n> still using disk for sorting (starting around 20th minute as you noticed).\n>\n> See if this:\n>\n> “Sort Method: external merge Disk: 3049216kB”\n>\n> goes away.\n>\n> Igor\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,I set the work_mem to 12GB, restarted postrgresql, repeat the same \"explain (analyze, buffers)...\" as above and the read throughput was very low, at most 10MB/s. All the sorting operation are now done in memory.I lowered the work_mem back to 6GB, restarted postrgresql, repeat the same \"explain (analyze, buffers)...\" as above and the read throughput was very low, at most 10MB/s. The 1st sorting operation is still done in memory, the second one to disk. I think I need about 4GB to do all sort in memory.One thing I remember from Friday's \"explain (analyze, buffers)...\". I set temp_buffer and work_mem to 6GB as I read both your message one after the other. So I decided to try again: I then set work_mem=6GB, temp_buffers=6GB, restarted postrgresql, repeat the same \"explain (analyze, buffers)...\" as above and the read throughput was very low again, at most 10MB/s. The 1st sorting operation is still done in memory, the second one to disk.For the last test, I brought back work_mem and temp_buffer to their original value. The read throughput is still low.In all cases, about 20 minutes after the query starts, it start writing to disk furiously. Here are the peak values as reported by dstat:work_mem=12GB, temp_buffers=8MB: peak of 393MB/swork_mem=6GB, temp_buffers=8MB: peak of 579MB/swork_mem=6GB, temp_buffers=6GB: peak of 418MB/swork_mem=1468006kB and temp_buffers=8MB, peak of 61MB/sAlso, at peak write, the server alomost ran of memory: the cache almost goes to 0 and it starts swapping.This query is a bit extreme in terms of sorting. Maybe I should try to benchmark while counting all the records of my biggest table like Mark Kirkwood suggested. I'll do some more tests and post the results back to the mailing list.Thanks!CharlesOn Fri, Jul 14, 2017 at 9:13 PM, Igor Neyman <[email protected]> wrote:\n\n\nFrom: Charles Nadeau [mailto:[email protected]]\n\nSent: Friday, July 14, 2017 11:35 AM\nTo: Igor Neyman <[email protected]>\nCc: Jeff Janes <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n \n\n\n\nIgor,\n\n\n \n\n\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. 
Should I set temp_buffer to 21GB?\n\n\nHere is the explain you requested with work_mem set to 6GB:\n\n\n\nflows=# set work_mem='6GB';\n\n\nSET\n\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport,\n\n\n COUNT(*) AS conversation,\n\n\n SUM(doctets) / 1024 / 1024 AS mbytes\n\n\nFROM\n\n\n flowscompact,\n\n\n mynetworks\n\n\nWHERE\n\n\n mynetworks.ipaddr >>= flowscompact.srcaddr\n\n\n AND dstaddr IN\n\n\n (\n\n\n SELECT\n\n\n dstaddr\n\n\n FROM\n\n\n dstexterne\n\n\n )\n\n\nGROUP BY\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport\n\n\nORDER BY\n\n\n mbytes DESC LIMIT 50;\n\n\n QUERY PLAN \n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\n\n\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\n\n\n Sort Method: quicksort Memory: 654395kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\n\n\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\n\n\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Sort Method: external merge Disk: 3049216kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\n\n\n Workers Planned: 12\n\n\n Workers Launched: 12\n\n\n Buffers: shared hit=728798037 read=82974833\n\n\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 rows=7057357 loops=13)\n\n\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\n\n\n Buffers: shared hit=728795193 read=82974833\n\n\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 loops=13)\n\n\n Buffers: shared hit=590692229 read=14991777\n\n\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\n\n\n Buffers: shared hit=860990 read=14991777\n\n\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190)\n\n\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\n\n\n Heap Fetches: 0\n\n\n Buffers: shared hit=589831203\n\n\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\n\n\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n\n\n 
Buffers: shared hit=138102964 read=67983056\n\n\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\n\n\n Group Key: flows_1.dstaddr\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\n\n\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n\n\n Rows Removed by Join Filter: 503353617\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\n\n\n Buffers: shared hit=138102915 read=67983056\n\n\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\n\n\n Buffers: shared hit=13\n\n\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\n\n\n Buffers: shared hit=13\n\n\n Planning time: 0.941 ms\n\n\n Execution time: 2228345.171 ms\n\n\n(48 rows)\n\n\n\n \n\n\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query. \n\n\n \n\n\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. flowscompact is a view built on table flows (having about 590 million rows), it only keeps\n the most often used fields.\n\n\n\nflows=# \\d+ flowscompact;\n\n\n View \"public.flowscompact\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n-----------+--------------------------+-----------+---------+-------------\n\n\n flow_id | bigint | | plain | \n\n\n sysuptime | bigint | | plain | \n\n\n exaddr | ip4 | | plain | \n\n\n dpkts | integer | | plain | \n\n\n doctets | bigint | | plain | \n\n\n first | bigint | | plain | \n\n\n last | bigint | | plain | \n\n\n srcaddr | ip4 | | plain | \n\n\n dstaddr | ip4 | | plain | \n\n\n srcport | integer | | plain | \n\n\n dstport | integer | | plain | \n\n\n prot | smallint | | plain | \n\n\n tos | smallint | | plain | \n\n\n tcp_flags | smallint | | plain | \n\n\n timestamp | timestamp with time zone | | plain | \n\n\nView definition:\n\n\n SELECT flowstimestamp.flow_id,\n\n\n flowstimestamp.sysuptime,\n\n\n flowstimestamp.exaddr,\n\n\n flowstimestamp.dpkts,\n\n\n flowstimestamp.doctets,\n\n\n flowstimestamp.first,\n\n\n flowstimestamp.last,\n\n\n flowstimestamp.srcaddr,\n\n\n flowstimestamp.dstaddr,\n\n\n flowstimestamp.srcport,\n\n\n flowstimestamp.dstport,\n\n\n flowstimestamp.prot,\n\n\n flowstimestamp.tos,\n\n\n flowstimestamp.tcp_flags,\n\n\n flowstimestamp.\"timestamp\"\n\n\n FROM flowstimestamp;\n\n\n\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\n\n\n\nflows=# select * from mynetworks;\n\n\n ipaddr \n\n\n----------------\n\n\n 192.168.0.0/24\n\n\n 10.112.12.0/30\n\n\n 10.112.12.4/30\n\n\n 10.112.12.8/30\n\n\n(4 row)\n\n\nflows=# \\d+ mynetworks;\n\n\n Table \"public.mynetworks\"\n\n\n Column | Type | Modifiers | Storage | Stats target | Description \n\n\n--------+------+-----------+---------+--------------+-------------\n\n\n ipaddr | ip4r | | plain | | \n\n\nIndexes:\n\n\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\n\n\n\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 
3.8 million rows.\n\n\n\nflows=# \\d+ dstexterne;\n\n\n View \"public.dstexterne\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n---------+------+-----------+---------+-------------\n\n\n dstaddr | ip4 | | plain | \n\n\nView definition:\n\n\n SELECT DISTINCT flowscompact.dstaddr\n\n\n FROM flowscompact\n\n\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\n\n\n WHERE mynetworks.ipaddr IS NULL;\n\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nCharles,\n \nDon’t change temp_buffers.\nTry to increase work_mem even more, say work_mem=’12GB’, because it’s still using disk for sorting (starting around 20th minute as you noticed).\nSee if this:\n“Sort Method: external merge Disk: 3049216kB”\ngoes away.\n\nIgor\n \n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Sun, 16 Jul 2017 11:20:57 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
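Two details worth noting from the tests above, as a sketch rather than a prescription: work_mem is a per-session setting, so different values can be tried with SET in the session that runs the query (no server restart needed), and log_temp_files makes any remaining spill show up in the server log instead of having to watch the temp directory by hand. The 6GB figure below is just the value under test.

# Log every temporary file the server creates (0 = log regardless of size).
psql -d flows -c "ALTER SYSTEM SET log_temp_files = 0;"
psql -d flows -c "SELECT pg_reload_conf();"

# In the session that runs the test query, no restart needed:
#   SET work_mem = '6GB';
#   EXPLAIN (ANALYZE, BUFFERS) <query>;
# Then look for "Sort Method: external merge Disk: ..." (spilled to disk)
# versus "Sort Method: quicksort Memory: ..." (fit in memory) in the plan.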
{
"msg_contents": "Scott,\n\nThe temp tablespace is on a disk of his own.\nThanks!\n\nCharles\n\nOn Sat, Jul 15, 2017 at 7:58 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Sat, Jul 15, 2017 at 11:53 AM, Charles Nadeau\n> <[email protected]> wrote:\n> > Mark,\n> >\n> > I increased the read ahead to 16384 and it doesn't improve performance.\n> My\n> > RAID 0 use a stripe size of 256k, the maximum size supported by the\n> > controller.\n>\n> Are your queries still spilling to disk for sorts? If this is the\n> case, and they're just too big to fit in memory, then you need to move\n> your temp space, where sorts happen, onto another disk array that\n> isn't your poor overworked raid-10 array. Contention between sorts and\n> reads can kill performance quick, esp on spinning rust.\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nScott,The temp tablespace is on a disk of his own.Thanks!CharlesOn Sat, Jul 15, 2017 at 7:58 PM, Scott Marlowe <[email protected]> wrote:On Sat, Jul 15, 2017 at 11:53 AM, Charles Nadeau\n<[email protected]> wrote:\n> Mark,\n>\n> I increased the read ahead to 16384 and it doesn't improve performance. My\n> RAID 0 use a stripe size of 256k, the maximum size supported by the\n> controller.\n\nAre your queries still spilling to disk for sorts? If this is the\ncase, and they're just too big to fit in memory, then you need to move\nyour temp space, where sorts happen, onto another disk array that\nisn't your poor overworked raid-10 array. Contention between sorts and\nreads can kill performance quick, esp on spinning rust.\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Sun, 16 Jul 2017 11:22:00 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Igor,\n\nThe 1st clause of the where statement won't select addresses the same way\nas the one I wrote using the extension for IPv6 and IPv6 data types.\n\nflowstimestamp is a view:\nflows=# \\d+ flowstimestamp\n View \"public.flowstimestamp\"\n Column | Type | Modifiers | Storage | Description\n-------------+--------------------------+-----------+---------+-------------\n flow_id | bigint | | plain |\n unix_secs | bigint | | plain |\n unix_nsecs | bigint | | plain |\n sysuptime | bigint | | plain |\n exaddr | ip4 | | plain |\n dpkts | integer | | plain |\n doctets | bigint | | plain |\n first | bigint | | plain |\n last | bigint | | plain |\n engine_type | smallint | | plain |\n engine_id | smallint | | plain |\n srcaddr | ip4 | | plain |\n dstaddr | ip4 | | plain |\n nexthop | ip4 | | plain |\n input | integer | | plain |\n output | integer | | plain |\n srcport | integer | | plain |\n dstport | integer | | plain |\n prot | smallint | | plain |\n tos | smallint | | plain |\n tcp_flags | smallint | | plain |\n src_mask | smallint | | plain |\n dst_mask | smallint | | plain |\n src_as | integer | | plain |\n dst_as | integer | | plain |\n timestamp | timestamp with time zone | | plain |\nView definition:\n SELECT flows.flow_id,\n flows.unix_secs,\n flows.unix_nsecs,\n flows.sysuptime,\n flows.exaddr,\n flows.dpkts,\n flows.doctets,\n flows.first,\n flows.last,\n flows.engine_type,\n flows.engine_id,\n flows.srcaddr,\n flows.dstaddr,\n flows.nexthop,\n flows.input,\n flows.output,\n flows.srcport,\n flows.dstport,\n flows.prot,\n flows.tos,\n flows.tcp_flags,\n flows.src_mask,\n flows.dst_mask,\n flows.src_as,\n flows.dst_as,\n to_timestamp((flows.unix_secs + flows.unix_nsecs / 1000000000)::double\nprecision) AS \"timestamp\"\n FROM flows;\n\nAnd it can use the indexes of flows:\nIndexes:\n \"flows_pkey\" PRIMARY KEY, btree (flow_id)\n \"flows_dstaddr_dstport\" btree (dstaddr, dstport)\n \"flows_srcaddr_dstaddr_idx\" btree (srcaddr, dstaddr)\n \"flows_srcaddr_srcport\" btree (srcaddr, srcport)\n \"flows_srcport_dstport_idx\" btree (srcport, dstport)\n\nThanks!\n\nCharles\n\nOn Fri, Jul 14, 2017 at 10:18 PM, Igor Neyman <[email protected]>\nwrote:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Igor Neyman\n> *Sent:* Friday, July 14, 2017 3:13 PM\n> *To:* Charles Nadeau <[email protected]>\n>\n> *Cc:* Jeff Janes <[email protected]>; [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> *From:* Charles Nadeau [mailto:[email protected]\n> <[email protected]>]\n> *Sent:* Friday, July 14, 2017 11:35 AM\n> *To:* Igor Neyman <[email protected]>\n> *Cc:* Jeff Janes <[email protected]>; [email protected]\n> *Subject:* Re: [PERFORM] Very poor read performance, query independent\n>\n>\n>\n> Igor,\n>\n>\n>\n> Initially temp_buffer was left to its default value (8MB). Watching the\n> content of the directory that stores the temporary files, I found that I\n> need at most 21GB of temporary files space. 
Should I set temp_buffer to\n> 21GB?\n>\n> Here is the explain you requested with work_mem set to 6GB:\n>\n> flows=# set work_mem='6GB';\n>\n> SET\n>\n> flows=# explain (analyze, buffers) SELECT DISTINCT\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport,\n>\n> COUNT(*) AS conversation,\n>\n> SUM(doctets) / 1024 / 1024 AS mbytes\n>\n> FROM\n>\n> flowscompact,\n>\n> mynetworks\n>\n> WHERE\n>\n> mynetworks.ipaddr >>= flowscompact.srcaddr\n>\n> AND dstaddr IN\n>\n> (\n>\n> SELECT\n>\n> dstaddr\n>\n> FROM\n>\n> dstexterne\n>\n> )\n>\n> GROUP BY\n>\n> srcaddr,\n>\n> dstaddr,\n>\n> dstport\n>\n> ORDER BY\n>\n> mbytes DESC LIMIT 50;\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------------------\n>\n> Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual\n> time=2227678.196..2227678.223 rows=50 loops=1)\n>\n> Buffers: shared hit=728798038 read=82974833, temp read=381154\n> written=381154\n>\n> -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52)\n> (actual time=2227678.194..2227678.217 rows=50 loops=1)\n>\n> Buffers: shared hit=728798038 read=82974833, temp read=381154\n> written=381154\n>\n> -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52)\n> (actual time=2227678.192..2227678.202 rows=50 loops=1)\n>\n> Sort Key: (((sum(flows.doctets) / '1024'::numeric) /\n> '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport,\n> (count(*))\n>\n> Sort Method: quicksort Memory: 654395kB\n>\n> Buffers: shared hit=728798038 read=82974833, temp\n> read=381154 written=381154\n>\n> -> GroupAggregate (cost=48059426.65..48079260.50\n> rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671\n> loops=1)\n>\n> Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n>\n> Buffers: shared hit=728798038 read=82974833, temp\n> read=381154 written=381154\n>\n> -> Sort (cost=48059426.65..48060748.90\n> rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640\n> loops=1)\n>\n> Sort Key: flows.srcaddr, flows.dstaddr,\n> flows.dstport\n>\n> Sort Method: external merge Disk: 3049216kB\n>\n> Buffers: shared hit=728798038 read=82974833,\n> temp read=381154 written=381154\n>\n> -> Gather (cost=30060688.07..48003007.07\n> rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640\n> loops=1)\n>\n> Workers Planned: 12\n>\n> Workers Launched: 12\n>\n> Buffers: shared hit=728798037\n> read=82974833\n>\n> -> Hash Semi Join\n> (cost=30059688.07..47951761.31 rows=220376 width=20) (actual\n> time=1268845.181..2007864.725 rows=7057357 loops=13)\n>\n> Hash Cond: (flows.dstaddr =\n> flows_1.dstaddr)\n>\n> Buffers: shared hit=728795193\n> read=82974833\n>\n> -> Nested Loop\n> (cost=0.03..17891246.86 rows=220376 width=20) (actual\n> time=0.207..723790.283 rows=37910370 loops=13)\n>\n> Buffers: shared hit=590692229\n> read=14991777\n>\n> -> Parallel Seq Scan on\n> flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual\n> time=0.152..566179.117 rows=45371630 loops=13)\n>\n> Buffers: shared\n> hit=860990 read=14991777\n>\n> -> Index Only Scan using\n> mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n> (actual time=0.002..0.002 rows=1 loops=589831190)\n>\n> Index Cond: (ipaddr >>=\n> (flows.srcaddr)::ip4r)\n>\n> Heap Fetches: 0\n>\n> Buffers: shared\n> hit=589831203\n>\n> -> Hash\n> (cost=30059641.47..30059641.47 rows=13305 width=4) (actual\n> 
time=1268811.101..1268811.101 rows=3803508 loops=13)\n>\n> Buckets: 4194304 (originally\n> 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n>\n> Buffers: shared hit=138102964\n> read=67983056\n>\n> -> HashAggregate\n> (cost=30059561.64..30059601.56 rows=13305 width=4) (actual\n> time=1265248.165..1267432.083 rows=3803508 loops=13)\n>\n> Group Key:\n> flows_1.dstaddr\n>\n> Buffers: shared\n> hit=138102964 read=67983056\n>\n> -> Nested Loop Anti\n> Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual\n> time=0.389..1201072.707 rows=125838232 loops=13)\n>\n> Join Filter:\n> (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n>\n> Rows Removed by\n> Join Filter: 503353617\n>\n> Buffers: shared\n> hit=138102964 read=67983056\n>\n> -> Seq Scan on\n> flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual\n> time=0.322..343152.274 rows=589831190 loops=13)\n>\n> Buffers:\n> shared hit=138102915 read=67983056\n>\n> -> Materialize\n> (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2\n> loops=7667805470)\n>\n> Buffers:\n> shared hit=13\n>\n> -> Seq\n> Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual\n> time=0.006..0.007 rows=4 loops=13)\n>\n>\n> Buffers: shared hit=13\n>\n> Planning time: 0.941 ms\n>\n> Execution time: 2228345.171 ms\n>\n> (48 rows)\n>\n>\n>\n> With a work_mem at 6GB, I noticed that for the first 20 minutes the query\n> was running, the i/o wait was much lower, hovering aroun 3% then it jumped\n> 45% until almost the end of the query.\n>\n>\n>\n> flowscompact and dstexterne are actually views. I use views to simplify\n> query writing and to \"abstract\" queries that are use often in other\n> queries. flowscompact is a view built on table flows (having about 590\n> million rows), it only keeps the most often used fields.\n>\n> flows=# \\d+ flowscompact;\n>\n> View \"public.flowscompact\"\n>\n> Column | Type | Modifiers | Storage | Description\n>\n> -----------+--------------------------+-----------+---------+-------------\n>\n> flow_id | bigint | | plain |\n>\n> sysuptime | bigint | | plain |\n>\n> exaddr | ip4 | | plain |\n>\n> dpkts | integer | | plain |\n>\n> doctets | bigint | | plain |\n>\n> first | bigint | | plain |\n>\n> last | bigint | | plain |\n>\n> srcaddr | ip4 | | plain |\n>\n> dstaddr | ip4 | | plain |\n>\n> srcport | integer | | plain |\n>\n> dstport | integer | | plain |\n>\n> prot | smallint | | plain |\n>\n> tos | smallint | | plain |\n>\n> tcp_flags | smallint | | plain |\n>\n> timestamp | timestamp with time zone | | plain |\n>\n> View definition:\n>\n> SELECT flowstimestamp.flow_id,\n>\n> flowstimestamp.sysuptime,\n>\n> flowstimestamp.exaddr,\n>\n> flowstimestamp.dpkts,\n>\n> flowstimestamp.doctets,\n>\n> flowstimestamp.first,\n>\n> flowstimestamp.last,\n>\n> flowstimestamp.srcaddr,\n>\n> flowstimestamp.dstaddr,\n>\n> flowstimestamp.srcport,\n>\n> flowstimestamp.dstport,\n>\n> flowstimestamp.prot,\n>\n> flowstimestamp.tos,\n>\n> flowstimestamp.tcp_flags,\n>\n> flowstimestamp.\"timestamp\"\n>\n> FROM flowstimestamp;\n>\n> mynetworks is a table having one column and 4 rows; it contains a list of\n> our network networks:\n>\n> flows=# select * from mynetworks;\n>\n> ipaddr\n>\n> ----------------\n>\n> 192.168.0.0/24\n>\n> 10.112.12.0/30\n>\n> 10.112.12.4/30\n>\n> 10.112.12.8/30\n>\n> (4 row)\n>\n> flows=# \\d+ mynetworks;\n>\n> Table \"public.mynetworks\"\n>\n> Column | Type | Modifiers | Storage | Stats target | Description\n>\n> 
--------+------+-----------+---------+--------------+-------------\n>\n> ipaddr | ip4r | | plain | |\n>\n> Indexes:\n>\n> \"mynetworks_ipaddr_idx\" gist (ipaddr)\n>\n> dstexterne is a view listing all the destination IPv4 addresses not inside\n> our network; it has one column and 3.8 million rows.\n>\n> flows=# \\d+ dstexterne;\n>\n> View \"public.dstexterne\"\n>\n> Column | Type | Modifiers | Storage | Description\n>\n> ---------+------+-----------+---------+-------------\n>\n> dstaddr | ip4 | | plain |\n>\n> View definition:\n>\n> SELECT DISTINCT flowscompact.dstaddr\n>\n> FROM flowscompact\n>\n> LEFT JOIN mynetworks ON mynetworks.ipaddr >>\n> flowscompact.dstaddr::ip4r\n>\n> WHERE mynetworks.ipaddr IS NULL;\n>\n> Thanks!\n>\n>\n>\n> Charles\n>\n>\n>\n> Charles,\n>\n>\n>\n> Also, let’s try to simplify your query and see if it performs better.\n>\n> You are grouping by srcaddr, dstaddr, dstport, that makes DISTINCT not\n> needed.\n>\n> And after simplifying WHERE clause (let me know if the result is not what\n> you want), the query looks like:\n>\n>\n>\n> SELECT srcaddr, dstaddr, dstport,\n>\n> COUNT(*) AS conversation,\n>\n> SUM(doctets) / 1024 / 1024 AS mbytes\n>\n> FROM flowscompact\n>\n> WHERE srcaddr IN (SELECT ipaddr FROM mynetworks)\n>\n> AND dstaddr NOT IN (SELECT ipaddr FROM mynetworks)\n>\n> GROUP BY srcaddr, dstaddr, dstport\n>\n> ORDER BY mbytes DESC\n>\n> LIMIT 50;\n>\n>\n>\n> Now, you didn’t provide the definition of flowstimestamp table.\n>\n> If this table doesn’t have an index on (srcaddr, dstaddr, dstport)\n> creating one should help (I think).\n>\n>\n>\n> Igor\n>\n>\n>\n>\n>\n>\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nIgor,The 1st clause of the where statement won't select addresses the same way as the one I wrote using the extension for IPv6 and IPv6 data types.flowstimestamp is a view:flows=# \\d+ flowstimestamp View \"public.flowstimestamp\" Column | Type | Modifiers | Storage | Description -------------+--------------------------+-----------+---------+------------- flow_id | bigint | | plain | unix_secs | bigint | | plain | unix_nsecs | bigint | | plain | sysuptime | bigint | | plain | exaddr | ip4 | | plain | dpkts | integer | | plain | doctets | bigint | | plain | first | bigint | | plain | last | bigint | | plain | engine_type | smallint | | plain | engine_id | smallint | | plain | srcaddr | ip4 | | plain | dstaddr | ip4 | | plain | nexthop | ip4 | | plain | input | integer | | plain | output | integer | | plain | srcport | integer | | plain | dstport | integer | | plain | prot | smallint | | plain | tos | smallint | | plain | tcp_flags | smallint | | plain | src_mask | smallint | | plain | dst_mask | smallint | | plain | src_as | integer | | plain | dst_as | integer | | plain | timestamp | timestamp with time zone | | plain | View definition: SELECT flows.flow_id, flows.unix_secs, flows.unix_nsecs, flows.sysuptime, flows.exaddr, flows.dpkts, flows.doctets, flows.first, flows.last, flows.engine_type, flows.engine_id, flows.srcaddr, flows.dstaddr, flows.nexthop, flows.input, flows.output, flows.srcport, flows.dstport, flows.prot, flows.tos, flows.tcp_flags, flows.src_mask, flows.dst_mask, flows.src_as, flows.dst_as, to_timestamp((flows.unix_secs + flows.unix_nsecs / 1000000000)::double precision) AS \"timestamp\" FROM flows;And it can use the indexes of flows:Indexes: \"flows_pkey\" PRIMARY KEY, btree (flow_id) \"flows_dstaddr_dstport\" btree (dstaddr, dstport) \"flows_srcaddr_dstaddr_idx\" btree (srcaddr, dstaddr) 
\"flows_srcaddr_srcport\" btree (srcaddr, srcport) \"flows_srcport_dstport_idx\" btree (srcport, dstport)Thanks!CharlesOn Fri, Jul 14, 2017 at 10:18 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Igor Neyman\nSent: Friday, July 14, 2017 3:13 PM\nTo: Charles Nadeau <[email protected]>\nCc: Jeff Janes <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n\n\n \n\nFrom: Charles Nadeau [mailto:[email protected]]\n\nSent: Friday, July 14, 2017 11:35 AM\nTo: Igor Neyman <[email protected]>\nCc: Jeff Janes <[email protected]>;\[email protected]\nSubject: Re: [PERFORM] Very poor read performance, query independent\n \n\n\n\nIgor,\n\n\n \n\n\nInitially temp_buffer was left to its default value (8MB). Watching the content of the directory that stores the temporary files, I found that I need at most 21GB of temporary files space. Should I set temp_buffer to 21GB?\n\n\nHere is the explain you requested with work_mem set to 6GB:\n\n\n\nflows=# set work_mem='6GB';\n\n\nSET\n\n\nflows=# explain (analyze, buffers) SELECT DISTINCT\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport,\n\n\n COUNT(*) AS conversation,\n\n\n SUM(doctets) / 1024 / 1024 AS mbytes\n\n\nFROM\n\n\n flowscompact,\n\n\n mynetworks\n\n\nWHERE\n\n\n mynetworks.ipaddr >>= flowscompact.srcaddr\n\n\n AND dstaddr IN\n\n\n (\n\n\n SELECT\n\n\n dstaddr\n\n\n FROM\n\n\n dstexterne\n\n\n )\n\n\nGROUP BY\n\n\n srcaddr,\n\n\n dstaddr,\n\n\n dstport\n\n\nORDER BY\n\n\n mbytes DESC LIMIT 50;\n\n\n QUERY PLAN \n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Limit (cost=48135680.07..48135680.22 rows=50 width=52) (actual time=2227678.196..2227678.223 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Unique (cost=48135680.07..48143613.62 rows=2644514 width=52) (actual time=2227678.194..2227678.217 rows=50 loops=1)\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48135680.07..48137002.33 rows=2644514 width=52) (actual time=2227678.192..2227678.202 rows=50 loops=1)\n\n\n Sort Key: (((sum(flows.doctets) / '1024'::numeric) / '1024'::numeric)) DESC, flows.srcaddr, flows.dstaddr, flows.dstport, (count(*))\n\n\n Sort Method: quicksort Memory: 654395kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> GroupAggregate (cost=48059426.65..48079260.50 rows=2644514 width=52) (actual time=2167909.030..2211446.192 rows=5859671 loops=1)\n\n\n Group Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Sort (cost=48059426.65..48060748.90 rows=2644514 width=20) (actual time=2167896.815..2189107.205 rows=91745640 loops=1)\n\n\n Sort Key: flows.srcaddr, flows.dstaddr, flows.dstport\n\n\n Sort Method: external merge Disk: 3049216kB\n\n\n Buffers: shared hit=728798038 read=82974833, temp read=381154 written=381154\n\n\n -> Gather (cost=30060688.07..48003007.07 rows=2644514 width=20) (actual time=1268989.000..1991357.232 rows=91745640 loops=1)\n\n\n Workers Planned: 12\n\n\n Workers Launched: 12\n\n\n Buffers: shared hit=728798037 read=82974833\n\n\n -> Hash Semi Join (cost=30059688.07..47951761.31 rows=220376 width=20) (actual time=1268845.181..2007864.725 
rows=7057357 loops=13)\n\n\n Hash Cond: (flows.dstaddr = flows_1.dstaddr)\n\n\n Buffers: shared hit=728795193 read=82974833\n\n\n -> Nested Loop (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283 rows=37910370 loops=13)\n\n\n Buffers: shared hit=590692229 read=14991777\n\n\n -> Parallel Seq Scan on flows (cost=0.00..16018049.14 rows=55094048 width=20) (actual time=0.152..566179.117 rows=45371630 loops=13)\n\n\n Buffers: shared hit=860990 read=14991777\n\n\n -> Index Only Scan using mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=589831190)\n\n\n Index Cond: (ipaddr >>= (flows.srcaddr)::ip4r)\n\n\n Heap Fetches: 0\n\n\n Buffers: shared hit=589831203\n\n\n -> Hash (cost=30059641.47..30059641.47 rows=13305 width=4) (actual time=1268811.101..1268811.101 rows=3803508 loops=13)\n\n\n Buckets: 4194304 (originally 16384) Batches: 1 (originally 1) Memory Usage: 166486kB\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> HashAggregate (cost=30059561.64..30059601.56 rows=13305 width=4) (actual time=1265248.165..1267432.083 rows=3803508 loops=13)\n\n\n Group Key: flows_1.dstaddr\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Nested Loop Anti Join (cost=0.00..29729327.92 rows=660467447 width=4) (actual time=0.389..1201072.707 rows=125838232 loops=13)\n\n\n Join Filter: (mynetworks_1.ipaddr >> (flows_1.dstaddr)::ip4r)\n\n\n Rows Removed by Join Filter: 503353617\n\n\n Buffers: shared hit=138102964 read=67983056\n\n\n -> Seq Scan on flows flows_1 (cost=0.00..17836152.73 rows=661128576 width=4) (actual time=0.322..343152.274 rows=589831190 loops=13)\n\n\n Buffers: shared hit=138102915 read=67983056\n\n\n -> Materialize (cost=0.00..1.02 rows=4 width=8) (actual time=0.000..0.000 rows=2 loops=7667805470)\n\n\n Buffers: shared hit=13\n\n\n -> Seq Scan on mynetworks mynetworks_1 (cost=0.00..1.01 rows=4 width=8) (actual time=0.006..0.007 rows=4 loops=13)\n\n\n Buffers: shared hit=13\n\n\n Planning time: 0.941 ms\n\n\n Execution time: 2228345.171 ms\n\n\n(48 rows)\n\n\n\n \n\n\nWith a work_mem at 6GB, I noticed that for the first 20 minutes the query was running, the i/o wait was much lower, hovering aroun 3% then it jumped 45% until almost the end of the query. \n\n\n \n\n\nflowscompact and dstexterne are actually views. I use views to simplify query writing and to \"abstract\" queries that are use often in other queries. 
flowscompact is a view built on table flows (having about 590 million rows), it only keeps\n the most often used fields.\n\n\n\nflows=# \\d+ flowscompact;\n\n\n View \"public.flowscompact\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n-----------+--------------------------+-----------+---------+-------------\n\n\n flow_id | bigint | | plain | \n\n\n sysuptime | bigint | | plain | \n\n\n exaddr | ip4 | | plain | \n\n\n dpkts | integer | | plain | \n\n\n doctets | bigint | | plain | \n\n\n first | bigint | | plain | \n\n\n last | bigint | | plain | \n\n\n srcaddr | ip4 | | plain | \n\n\n dstaddr | ip4 | | plain | \n\n\n srcport | integer | | plain | \n\n\n dstport | integer | | plain | \n\n\n prot | smallint | | plain | \n\n\n tos | smallint | | plain | \n\n\n tcp_flags | smallint | | plain | \n\n\n timestamp | timestamp with time zone | | plain | \n\n\nView definition:\n\n\n SELECT flowstimestamp.flow_id,\n\n\n flowstimestamp.sysuptime,\n\n\n flowstimestamp.exaddr,\n\n\n flowstimestamp.dpkts,\n\n\n flowstimestamp.doctets,\n\n\n flowstimestamp.first,\n\n\n flowstimestamp.last,\n\n\n flowstimestamp.srcaddr,\n\n\n flowstimestamp.dstaddr,\n\n\n flowstimestamp.srcport,\n\n\n flowstimestamp.dstport,\n\n\n flowstimestamp.prot,\n\n\n flowstimestamp.tos,\n\n\n flowstimestamp.tcp_flags,\n\n\n flowstimestamp.\"timestamp\"\n\n\n FROM flowstimestamp;\n\n\n\nmynetworks is a table having one column and 4 rows; it contains a list of our network networks:\n\n\n\nflows=# select * from mynetworks;\n\n\n ipaddr \n\n\n----------------\n\n\n 192.168.0.0/24\n\n\n 10.112.12.0/30\n\n\n 10.112.12.4/30\n\n\n 10.112.12.8/30\n\n\n(4 row)\n\n\nflows=# \\d+ mynetworks;\n\n\n Table \"public.mynetworks\"\n\n\n Column | Type | Modifiers | Storage | Stats target | Description \n\n\n--------+------+-----------+---------+--------------+-------------\n\n\n ipaddr | ip4r | | plain | | \n\n\nIndexes:\n\n\n \"mynetworks_ipaddr_idx\" gist (ipaddr)\n\n\n\ndstexterne is a view listing all the destination IPv4 addresses not inside our network; it has one column and 3.8 million rows.\n\n\n\nflows=# \\d+ dstexterne;\n\n\n View \"public.dstexterne\"\n\n\n Column | Type | Modifiers | Storage | Description \n\n\n---------+------+-----------+---------+-------------\n\n\n dstaddr | ip4 | | plain | \n\n\nView definition:\n\n\n SELECT DISTINCT flowscompact.dstaddr\n\n\n FROM flowscompact\n\n\n LEFT JOIN mynetworks ON mynetworks.ipaddr >> flowscompact.dstaddr::ip4r\n\n\n WHERE mynetworks.ipaddr IS NULL;\n\n\n\nThanks!\n\n\n \n\n\n\nCharles\n\n\n\n\n \nCharles,\n \nAlso, let’s try to simplify your query and see if it performs better.\nYou are grouping by srcaddr, dstaddr, dstport, that makes DISTINCT not needed.\nAnd after simplifying WHERE clause (let me know if the result is not what you want), the query looks like:\n \nSELECT srcaddr, dstaddr, dstport,\n COUNT(*) AS conversation,\n SUM(doctets) / 1024 / 1024 AS mbytes\nFROM flowscompact\nWHERE srcaddr IN (SELECT ipaddr FROM mynetworks)\n AND dstaddr NOT IN (SELECT ipaddr FROM mynetworks)\nGROUP BY srcaddr, dstaddr, dstport\nORDER BY mbytes DESC\n\nLIMIT 50;\n \nNow, you didn’t provide the definition of flowstimestamp table.\nIf this table doesn’t have an index on (srcaddr, dstaddr, dstport) creating one should help (I think).\n \nIgor\n \n \n \n\n\n\n\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Mon, 17 Jul 2017 13:22:47 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
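As a sketch of the index Igor suggests above, assuming flowstimestamp is an ordinary table behind the flowscompact view (its definition is not shown in the thread); the index name is illustrative:

    CREATE INDEX CONCURRENTLY flowstimestamp_src_dst_dport_idx
        ON flowstimestamp (srcaddr, dstaddr, dstport);

CONCURRENTLY avoids blocking concurrent writes while the index builds, at the cost of a slower build; whether the planner actually uses the index depends on the predicates pushed down through the view.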
{
"msg_contents": "On Fri, Jul 14, 2017 at 12:34 PM, Charles Nadeau\n<[email protected]> wrote:\n> Workers Planned: 12\n> Workers Launched: 12\n> Buffers: shared hit=728798037 read=82974833\n> -> Hash Semi Join\n> (cost=30059688.07..47951761.31 rows=220376 width=20) (actual\n> time=1268845.181..2007864.725 rows=7057357 loops=13)\n> Hash Cond: (flows.dstaddr =\n> flows_1.dstaddr)\n> Buffers: shared hit=728795193\n> read=82974833\n> -> Nested Loop\n> (cost=0.03..17891246.86 rows=220376 width=20) (actual time=0.207..723790.283\n> rows=37910370 loops=13)\n> Buffers: shared hit=590692229\n> read=14991777\n> -> Parallel Seq Scan on flows\n> (cost=0.00..16018049.14 rows=55094048 width=20) (actual\n> time=0.152..566179.117 rows=45371630 loops=13)\n> Buffers: shared\n> hit=860990 read=14991777\n> -> Index Only Scan using\n> mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n> (actual time=0.002..0.002 rows=1 loops=589831190)\n> Index Cond: (ipaddr >>=\n> (flows.srcaddr)::ip4r)\n> Heap Fetches: 0\n> Buffers: shared\n> hit=589831203\n\n12 workers on a parallel sequential scan on a RAID-10 volume of\nrotating disks may not be a good idea.\n\nHave you measured average request size and average wait times with iostat?\n\nRun \"iostat -x -m -d 60\" while running the query and copy a few\nrelevant lines (or attach the whole thing). I suspect 12 parallel\nsequential scans are degrading your array's performance to random I/O\nperformance, and that explains the 10MB/s very well (a rotating disk\nwill give you about 3-4MB/s at random I/O, and you've got 2 mirrors on\nthat array).\n\nYou could try setting the max_parallel_workers_per_gather to 2, which\nshould be the optimum allocation for your I/O layout.\n\nYou might also want to test switching to the deadline scheduler. While\nthe controller may get more aggregate thoughput rearranging your I/O\nrequests, high I/O latency will severly reduce postgres' ability to\nsaturate the I/O system itself, and deadlines tends to minimize\nlatency. I've had good results in the past using deadline, but take\nthis suggestion with a grain of salt, YMMV.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 17 Jul 2017 17:56:23 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
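A minimal sketch of applying Claudio's recommendation on PostgreSQL 9.6; ALTER SYSTEM writes the value to postgresql.auto.conf, and this particular setting only needs a configuration reload, not a restart:

    ALTER SYSTEM SET max_parallel_workers_per_gather = 2;
    SELECT pg_reload_conf();
    SHOW max_parallel_workers_per_gather;  -- verify the new value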
{
"msg_contents": "Claudio,\n\nFind attached the iostat measured while redoing the query above\n(iostat1.txt). sda holds my temp directory (noop i/o scheduler), sdb the\nswap partition (cfq i/o scheduler) only and sdc (5 disks RAID0, noop i/o\nscheduler) holds the data. I didn't pay attention to the load caused by 12\nparallel scans as I thought the RAID card would be smart enough to\nre-arrange the read requests optimally regardless of the load. At one\nmoment during the query, there is a write storm to the swap drive (a bit\nlike this case:\nhttps://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\nI can hardly explain it as there is plenty of memory on this server. The\nexecution time of the query was 4801.1s (about 1h20min).\nI reduced max_parallel_workers_per_gather to 2 and max_parallel_workers to\n3, restarted postgresql then ran the query again while running iostat again\n(iostat2.txt): The query ran much faster, 1992.8s (about 33min) instead of\n4801.1s (about 1h20min) and the swap storm is gone! You were right about\nthe max_parallel_workers_per_gather!!\nFor the last test, I changed the scheduler on sdc to deadline (iostat3.txt)\nkeeping max_parallel_workers_per_gather=2 and max_parallel_workers=3 then\nrestarted postgresql. The execution time is almost the same: 1938.7s vs\n1992.8s for the noop scheduler.\n\nThanks a lot for the suggestion, I'll keep my number of worker low to make\nsure I maximize my array usage.\n\nCharles\n\nOn Mon, Jul 17, 2017 at 10:56 PM, Claudio Freire <[email protected]>\nwrote:\n\n> On Fri, Jul 14, 2017 at 12:34 PM, Charles Nadeau\n> <[email protected]> wrote:\n> > Workers Planned: 12\n> > Workers Launched: 12\n> > Buffers: shared hit=728798037\n> read=82974833\n> > -> Hash Semi Join\n> > (cost=30059688.07..47951761.31 rows=220376 width=20) (actual\n> > time=1268845.181..2007864.725 rows=7057357 loops=13)\n> > Hash Cond: (flows.dstaddr =\n> > flows_1.dstaddr)\n> > Buffers: shared hit=728795193\n> > read=82974833\n> > -> Nested Loop\n> > (cost=0.03..17891246.86 rows=220376 width=20) (actual\n> time=0.207..723790.283\n> > rows=37910370 loops=13)\n> > Buffers: shared\n> hit=590692229\n> > read=14991777\n> > -> Parallel Seq Scan on\n> flows\n> > (cost=0.00..16018049.14 rows=55094048 width=20) (actual\n> > time=0.152..566179.117 rows=45371630 loops=13)\n> > Buffers: shared\n> > hit=860990 read=14991777\n> > -> Index Only Scan using\n> > mynetworks_ipaddr_idx on mynetworks (cost=0.03..0.03 rows=1 width=8)\n> > (actual time=0.002..0.002 rows=1 loops=589831190)\n> > Index Cond: (ipaddr\n> >>=\n> > (flows.srcaddr)::ip4r)\n> > Heap Fetches: 0\n> > Buffers: shared\n> > hit=589831203\n>\n> 12 workers on a parallel sequential scan on a RAID-10 volume of\n> rotating disks may not be a good idea.\n>\n> Have you measured average request size and average wait times with iostat?\n>\n> Run \"iostat -x -m -d 60\" while running the query and copy a few\n> relevant lines (or attach the whole thing). I suspect 12 parallel\n> sequential scans are degrading your array's performance to random I/O\n> performance, and that explains the 10MB/s very well (a rotating disk\n> will give you about 3-4MB/s at random I/O, and you've got 2 mirrors on\n> that array).\n>\n> You could try setting the max_parallel_workers_per_gather to 2, which\n> should be the optimum allocation for your I/O layout.\n>\n> You might also want to test switching to the deadline scheduler. 
While\n> the controller may get more aggregate thoughput rearranging your I/O\n> requests, high I/O latency will severly reduce postgres' ability to\n> saturate the I/O system itself, and deadlines tends to minimize\n> latency. I've had good results in the past using deadline, but take\n> this suggestion with a grain of salt, YMMV.\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 18 Jul 2017 11:20:35 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau\n<[email protected]> wrote:\n> Claudio,\n>\n> At one moment\n> during the query, there is a write storm to the swap drive (a bit like this\n> case:\n> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n> I can hardly explain it as there is plenty of memory on this server.\n\nThat sounds a lot like NUMA zone_reclaim issues:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Jul 2017 13:01:29 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "On Tue, Jul 18, 2017 at 1:01 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau\n> <[email protected]> wrote:\n>> Claudio,\n>>\n>> At one moment\n>> during the query, there is a write storm to the swap drive (a bit like this\n>> case:\n>> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n>> I can hardly explain it as there is plenty of memory on this server.\n>\n> That sounds a lot like NUMA zone_reclaim issues:\n>\n> https://www.postgresql.org/message-id/[email protected]\n\nI realize you have zone_reclaim_mode set to 0. Still, the symptoms are\neerily similar.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Jul 2017 14:13:58 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "On Tue, Jul 18, 2017 at 02:13:58PM -0300, Claudio Freire wrote:\n> On Tue, Jul 18, 2017 at 1:01 PM, Claudio Freire <[email protected]> wrote:\n> > On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau\n> > <[email protected]> wrote:\n> >> Claudio,\n> >>\n> >> At one moment\n> >> during the query, there is a write storm to the swap drive (a bit like this\n> >> case:\n> >> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n> >> I can hardly explain it as there is plenty of memory on this server.\n> >\n> > That sounds a lot like NUMA zone_reclaim issues:\n> >\n> > https://www.postgresql.org/message-id/[email protected]\n> \n> I realize you have zone_reclaim_mode set to 0. Still, the symptoms are\n> eerily similar.\n\nDid you look at disabling KSM and/or THP ?\n\nsudo sh -c 'echo 2 >/sys/kernel/mm/ksm/run'\n\nhttps://www.postgresql.org/message-id/20170524155855.GH31097%40telsasoft.com\nhttps://www.postgresql.org/message-id/CANQNgOrD02f8mR3Y8Pi=zFsoL14RqNQA8hwz1r4rSnDLr1b2Cw@mail.gmail.com\nhttps://www.postgresql.org/message-id/CAHyXU0y9hviyKWvQZxX5UWfH9M2LYvwvAOPQ_DUPva2b71t12g%40mail.gmail.com\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/CAE_gQfW3dBiELcOppYN6v%3D8%2B%2BpEeywD7iXGw-OT3doB8SXO4_A%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/1436268563235-5856914.post%40n5.nabble.com#[email protected]\nhttps://www.postgresql.org/message-id/CAL_0b1tJOZCx3Lo3Eve1RqGaT%[email protected]\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/1415981309.90631.YahooMailNeo%40web133205.mail.ir2.yahoo.com\nhttps://www.postgresql.org/message-id/CAHyXU0yXYpCXN4%3D81ZDRQu-oGzrcq2qNAXDpyz4oiQPPAGk4ew%40mail.gmail.com\nhttps://www.pythian.com/blog/performance-tuning-hugepages-in-linux/\nhttp://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Jul 2017 13:01:52 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "On Tue, Jul 18, 2017 at 3:20 AM, Charles Nadeau\n<[email protected]> wrote:\n> Claudio,\n>\n> Find attached the iostat measured while redoing the query above\n> (iostat1.txt). sda holds my temp directory (noop i/o scheduler), sdb the\n> swap partition (cfq i/o scheduler) only and sdc (5 disks RAID0, noop i/o\n> scheduler) holds the data. I didn't pay attention to the load caused by 12\n> parallel scans as I thought the RAID card would be smart enough to\n> re-arrange the read requests optimally regardless of the load. At one moment\n> during the query, there is a write storm to the swap drive (a bit like this\n> case:\n> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n\nMy experience from that case (and few more) has led me to believe\nthat Linux database servers with plenty of memory should have their\nswaps turned off. The Linux kernel works hard to swap out little used\nmemory to make more space for caching active data.\n\nProblem is that whatever decides to swap stuff out gets stupid when\npresented with 512GB RAM and starts swapping out things like sys v\nshared_buffers etc.\n\nHere's the thing, either your memory is big enough to buffer your\nwhole data set, so nothing should get swapped out to make room for\ncaching.\n\nOR your dataset is much bigger than memory. In which case, making more\nroom gets very little if it comes at the cost of waiting for stuff you\nneed to get read back in.\n\nLinux servers should also have zone reclaim turned off, and THP disabled.\n\nTry running \"sudo swapoff -a\" and see if it gets rid of your swap storms.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Jul 2017 19:08:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Justin,\n\nThanks for the extensive reading list, very educative.\n\nAfter reading\nhttps://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/\nI was thinking that it could be a NUMA/THP-related problem.\nTurning off THP solved the \"swap storm\" problem. Some queries are even 40%\nfaster with THP off.\nThen also turning off KSM improved performance by another 5%\nI was seriously worried about this issue as we received today another\nserver with 144GB of RAM.\n\nI will try to post a little summary of all the suggestion I received via\nthis thread later this week/early next week.\n\nThanks!\n\nCharles\n\nOn Tue, Jul 18, 2017 at 8:01 PM, Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Jul 18, 2017 at 02:13:58PM -0300, Claudio Freire wrote:\n> > On Tue, Jul 18, 2017 at 1:01 PM, Claudio Freire <[email protected]>\n> wrote:\n> > > On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau\n> > > <[email protected]> wrote:\n> > >> Claudio,\n> > >>\n> > >> At one moment\n> > >> during the query, there is a write storm to the swap drive (a bit\n> like this\n> > >> case:\n> > >> https://www.postgresql.org/message-id/AANLkTi%\n> 3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n> > >> I can hardly explain it as there is plenty of memory on this server.\n> > >\n> > > That sounds a lot like NUMA zone_reclaim issues:\n> > >\n> > > https://www.postgresql.org/message-id/[email protected]\n> >\n> > I realize you have zone_reclaim_mode set to 0. Still, the symptoms are\n> > eerily similar.\n>\n> Did you look at disabling KSM and/or THP ?\n>\n> sudo sh -c 'echo 2 >/sys/kernel/mm/ksm/run'\n>\n> https://www.postgresql.org/message-id/20170524155855.\n> GH31097%40telsasoft.com\n> https://www.postgresql.org/message-id/CANQNgOrD02f8mR3Y8Pi=\n> [email protected]\n> https://www.postgresql.org/message-id/CAHyXU0y9hviyKWvQZxX5UWfH9M2LY\n> vwvAOPQ_DUPva2b71t12g%40mail.gmail.com\n> https://www.postgresql.org/message-id/20130716195834.\n> [email protected]\n> https://www.postgresql.org/message-id/CAE_gQfW3dBiELcOppYN6v%3D8%2B%\n> 2BpEeywD7iXGw-OT3doB8SXO4_A%40mail.gmail.com\n> https://www.postgresql.org/message-id/flat/1436268563235-\n> 5856914.post%40n5.nabble.com#[email protected]\n> https://www.postgresql.org/message-id/CAL_0b1tJOZCx3Lo3Eve1RqGaT%2BJJ_\n> [email protected]\n> https://www.postgresql.org/message-id/[email protected]\n> https://www.postgresql.org/message-id/1415981309.90631.\n> YahooMailNeo%40web133205.mail.ir2.yahoo.com\n> https://www.postgresql.org/message-id/CAHyXU0yXYpCXN4%3D81ZDRQu-\n> oGzrcq2qNAXDpyz4oiQPPAGk4ew%40mail.gmail.com\n> https://www.pythian.com/blog/performance-tuning-hugepages-in-linux/\n> http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-\n> hadoop-workloads/\n>\n> Justin\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nJustin,Thanks for the extensive reading list, very educative.After reading https://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/ I was thinking that it could be a NUMA/THP-related problem.Turning off THP solved the \"swap storm\" problem. 
Some queries are even 40% faster with THP off.Then also turning off KSM improved performance by another 5%I was seriously worried about this issue as we received today another server with 144GB of RAM.I will try to post a little summary of all the suggestion I received via this thread later this week/early next week.Thanks!CharlesOn Tue, Jul 18, 2017 at 8:01 PM, Justin Pryzby <[email protected]> wrote:On Tue, Jul 18, 2017 at 02:13:58PM -0300, Claudio Freire wrote:\n> On Tue, Jul 18, 2017 at 1:01 PM, Claudio Freire <[email protected]> wrote:\n> > On Tue, Jul 18, 2017 at 6:20 AM, Charles Nadeau\n> > <[email protected]> wrote:\n> >> Claudio,\n> >>\n> >> At one moment\n> >> during the query, there is a write storm to the swap drive (a bit like this\n> >> case:\n> >> https://www.postgresql.org/message-id/AANLkTi%3Diw4fC2RgTxhw0aGpyXANhOT%3DXBnjLU1_v6PdA%40mail.gmail.com).\n> >> I can hardly explain it as there is plenty of memory on this server.\n> >\n> > That sounds a lot like NUMA zone_reclaim issues:\n> >\n> > https://www.postgresql.org/message-id/[email protected]\n>\n> I realize you have zone_reclaim_mode set to 0. Still, the symptoms are\n> eerily similar.\n\nDid you look at disabling KSM and/or THP ?\n\nsudo sh -c 'echo 2 >/sys/kernel/mm/ksm/run'\n\nhttps://www.postgresql.org/message-id/20170524155855.GH31097%40telsasoft.com\nhttps://www.postgresql.org/message-id/CANQNgOrD02f8mR3Y8Pi=zFsoL14RqNQA8hwz1r4rSnDLr1b2Cw@mail.gmail.com\nhttps://www.postgresql.org/message-id/CAHyXU0y9hviyKWvQZxX5UWfH9M2LYvwvAOPQ_DUPva2b71t12g%40mail.gmail.com\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/CAE_gQfW3dBiELcOppYN6v%3D8%2B%2BpEeywD7iXGw-OT3doB8SXO4_A%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/1436268563235-5856914.post%40n5.nabble.com#[email protected]\nhttps://www.postgresql.org/message-id/CAL_0b1tJOZCx3Lo3Eve1RqGaT%[email protected]\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/1415981309.90631.YahooMailNeo%40web133205.mail.ir2.yahoo.com\nhttps://www.postgresql.org/message-id/CAHyXU0yXYpCXN4%3D81ZDRQu-oGzrcq2qNAXDpyz4oiQPPAGk4ew%40mail.gmail.com\nhttps://www.pythian.com/blog/performance-tuning-hugepages-in-linux/\nhttp://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/\n\nJustin\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Wed, 19 Jul 2017 13:48:54 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "Mark,\n\nI received yesterday a second server having 16 drives bays. Just for a\nquick trial, I used 2 old 60GB SSD (a Kingston V300 and a ADATA SP900) to\nbuild a RAID0. To my surprise, my very pecky RAID controller (HP P410i)\nrecognised them without a fuss (although as SATAII drives at 3Gb/s. A quick\nfio benchmark gives me 22000 random 4k read IOPS, more than my 5 146GB 10k\nSAS disks in RAID0). I moved my most frequently used index to this array\nand will try to do some benchmarks.\nKnowing that SSDs based on SandForce-2281 controller are recognised by my\nserver, I may buy a pair of bigger/newer ones to put my tables on.\n\nThanks!\n\nCharles\n\nOn Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> Thinking about this a bit more - if somewhat more blazing performance is\n> needed, then this could be achieved via losing the RAID card and spinning\n> disks altogether and buying 1 of the NVME or SATA solid state products: e.g\n>\n> - Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds and 200K\n> IOPS)\n>\n> - Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n>\n>\n> The Samsung needs an M.2 port on the mobo (but most should have 'em - and\n> if not PCIe X4 adapter cards are quite cheap). The Intel is a bit more\n> expensive compared to the Samsung, and is slower but has a longer lifetime.\n> However for your workload the Sammy is probably fine.\n>\n> regards\n>\n> Mark\n>\n> On 15/07/17 11:09, Mark Kirkwood wrote:\n>\n>> Ah yes - that seems more sensible (but still slower than I would expect\n>> for 5 disks RAID 0).\n>>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nMark,I received yesterday a second server having 16 drives bays. Just for a quick trial, I used 2 old 60GB SSD (a Kingston V300 and a ADATA SP900) to build a RAID0. To my surprise, my very pecky RAID controller (HP P410i) recognised them without a fuss (although as SATAII drives at 3Gb/s. A quick fio benchmark gives me 22000 random 4k read IOPS, more than my 5 146GB 10k SAS disks in RAID0). I moved my most frequently used index to this array and will try to do some benchmarks.Knowing that SSDs based on SandForce-2281 controller are recognised by my server, I may buy a pair of bigger/newer ones to put my tables on.Thanks!CharlesOn Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood <[email protected]> wrote:Thinking about this a bit more - if somewhat more blazing performance is needed, then this could be achieved via losing the RAID card and spinning disks altogether and buying 1 of the NVME or SATA solid state products: e.g\n\n- Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds and 200K IOPS)\n\n- Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n\n\nThe Samsung needs an M.2 port on the mobo (but most should have 'em - and if not PCIe X4 adapter cards are quite cheap). The Intel is a bit more expensive compared to the Samsung, and is slower but has a longer lifetime. 
However for your workload the Sammy is probably fine.\n\nregards\n\nMark\n\nOn 15/07/17 11:09, Mark Kirkwood wrote:\n\nAh yes - that seems more sensible (but still slower than I would expect for 5 disks RAID 0).\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Thu, 20 Jul 2017 14:50:53 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
{
"msg_contents": "All,\n\nHere is a list of what I did based of the suggestions made after my initial\npost:\n*Reduce max_parallel_workers to 4: Values higher makes the workers wait for\ndata as the RAID0 array can't deliver high enough IOPS.\n*Reduce random_page_cost to 1: Forcing the use of index makes queries\nfaster despite low random throughput.\n*Increase shared_buffer to 66GB and effective_cache_size to 53GB: With the\nnew server having 144GB of RAM, increasing shared_buffer allows Postgresql\nto keep a lot of data in memory reducing the need to go to disk.\n*Reduce min_parallel_relation_size to 512kB to have more workers when doing\nsequential parallel scan\n*Increased the /sys/block/sd[ac]/queue/read_ahead_kb to 16384 for my arrays\nusing HDD\n*Reused old SSDs (that are compatible with my RAID controller, to my\nsurprise) to put my most used index and tables.\n\nThanks to everybody who made suggestions. I now know more about Postgresql\ntuning.\n\nCharles\n\nOn Mon, Jul 10, 2017 at 4:03 PM, Charles Nadeau <[email protected]>\nwrote:\n\n> I’m running PostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic).\n> Hardware is:\n>\n> *2x Intel Xeon E5550\n>\n> *72GB RAM\n>\n> *Hardware RAID10 (4 x 146GB SAS 10k) P410i controller with 1GB FBWC (80%\n> read/20% write) for Postgresql data only:\n>\n> Logical Drive: 3\n>\n> Size: 273.4 GB\n>\n> Fault Tolerance: 1+0\n>\n> Heads: 255\n>\n> Sectors Per Track: 32\n>\n> Cylinders: 65535\n>\n> Strip Size: 128 KB\n>\n> Full Stripe Size: 256 KB\n>\n> Status: OK\n>\n> Caching: Enabled\n>\n> Unique Identifier: 600508B1001037383941424344450A00\n>\n> Disk Name: /dev/sdc\n>\n> Mount Points: /mnt/data 273.4 GB\n>\n> OS Status: LOCKED\n>\n> Logical Drive Label: A00A194750123456789ABCDE516F\n>\n> Mirror Group 0:\n>\n> physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n>\n> physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n>\n> Mirror Group 1:\n>\n> physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n>\n> physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n>\n> Drive Type: Data\n>\n> Formatted with ext4 with: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v\n> /dev/sdc1.\n>\n> Mounted in /etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n> /mnt/data ext4 rw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro\n> 0 1\"\n>\n> Postgresql is the only application running on this server.\n>\n>\n> Postgresql is used as a mini data warehouse to generate reports and do\n> statistical analysis. It is used by at most 2 users and fresh data is added\n> every 10 days. 
The database has 16 tables: one is 224GB big and the rest\n> are between 16kB and 470MB big.\n>\n>\n> My configuration is:\n>\n>\n> name | current_setting | source\n>\n> ---------------------------------+--------------------------\n> ----------------------+----------------------\n>\n> application_name | psql | client\n>\n> autovacuum_vacuum_scale_factor | 0 | configuration file\n>\n> autovacuum_vacuum_threshold | 2000 | configuration file\n>\n> checkpoint_completion_target | 0.9 | configuration file\n>\n> checkpoint_timeout | 30min | configuration file\n>\n> client_encoding | UTF8 | client\n>\n> client_min_messages | log | configuration file\n>\n> cluster_name | 9.6/main | configuration file\n>\n> cpu_index_tuple_cost | 0.001 | configuration file\n>\n> cpu_operator_cost | 0.0005 | configuration file\n>\n> cpu_tuple_cost | 0.003 | configuration file\n>\n> DateStyle | ISO, YMD | configuration file\n>\n> default_statistics_target | 100 | configuration file\n>\n> default_text_search_config | pg_catalog.english | configuration file\n>\n> dynamic_shared_memory_type | posix | configuration file\n>\n> effective_cache_size | 22GB | configuration file\n>\n> effective_io_concurrency | 4 | configuration file\n>\n> external_pid_file | /var/run/postgresql/9.6-main.pid | configuration file\n>\n> lc_messages | C | configuration file\n>\n> lc_monetary | en_CA.UTF-8 | configuration file\n>\n> lc_numeric | en_CA.UTF-8 | configuration file\n>\n> lc_time | en_CA.UTF-8 | configuration file\n>\n> listen_addresses | * | configuration file\n>\n> lock_timeout | 100s | configuration file\n>\n> log_autovacuum_min_duration | 0 | configuration file\n>\n> log_checkpoints | on | configuration file\n>\n> log_connections | on | configuration file\n>\n> log_destination | csvlog | configuration file\n>\n> log_directory | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\n> configuration file\n>\n> log_disconnections | on | configuration file\n>\n> log_error_verbosity | default | configuration file\n>\n> log_file_mode | 0600 | configuration file\n>\n> log_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\n>\n> log_line_prefix | user=%u,db=%d,app=%aclient=%h | configuration file\n>\n> log_lock_waits | on | configuration file\n>\n> log_min_duration_statement | 0 | configuration file\n>\n> log_min_error_statement | debug1 | configuration file\n>\n> log_min_messages | debug1 | configuration file\n>\n> log_rotation_size | 1GB | configuration file\n>\n> log_temp_files | 0 | configuration file\n>\n> log_timezone | localtime | configuration file\n>\n> logging_collector | on | configuration file\n>\n> maintenance_work_mem | 3GB | configuration file\n>\n> max_connections | 10 | configuration file\n>\n> max_locks_per_transaction | 256 | configuration file\n>\n> max_parallel_workers_per_gather | 14 | configuration file\n>\n> max_stack_depth | 2MB | environment variable\n>\n> max_wal_size | 4GB | configuration file\n>\n> max_worker_processes | 14 | configuration file\n>\n> min_wal_size | 2GB | configuration file\n>\n> parallel_setup_cost | 1000 | configuration file\n>\n> parallel_tuple_cost | 0.012 | configuration file\n>\n> port | 5432 | configuration file\n>\n> random_page_cost | 22 | configuration file\n>\n> seq_page_cost | 1 | configuration file\n>\n> shared_buffers | 34GB | configuration file\n>\n> shared_preload_libraries | pg_stat_statements | configuration file\n>\n> ssl | on | configuration file\n>\n> ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n>\n> ssl_key_file | 
/etc/ssl/private/ssl-cert-snakeoil.key | configuration file\n>\n> statement_timeout | 1000000s | configuration file\n>\n> stats_temp_directory | /var/run/postgresql/9.6-main.pg_stat_tmp |\n> configuration file\n>\n> superuser_reserved_connections | 1 | configuration file\n>\n> syslog_facility | local1 | configuration file\n>\n> syslog_ident | postgres | configuration file\n>\n> syslog_sequence_numbers | on | configuration file\n>\n> temp_file_limit | 80GB | configuration file\n>\n> TimeZone | localtime | configuration file\n>\n> track_activities | on | configuration file\n>\n> track_counts | on | configuration file\n>\n> track_functions | all | configuration file\n>\n> unix_socket_directories | /var/run/postgresql | configuration file\n>\n> vacuum_cost_delay | 1ms | configuration file\n>\n> vacuum_cost_limit | 5000 | configuration file\n>\n> vacuum_cost_page_dirty | 200 | configuration file\n>\n> vacuum_cost_page_hit | 10 | configuration file\n>\n> vacuum_cost_page_miss | 100 | configuration file\n>\n> wal_buffers | 16MB | configuration file\n>\n> wal_compression | on | configuration file\n>\n> wal_sync_method | fdatasync | configuration file\n>\n> work_mem | 1468006kB | configuration file\n>\n>\n> The part of /etc/sysctl.conf I modified is:\n>\n> vm.swappiness = 1\n>\n> vm.dirty_background_bytes = 134217728\n>\n> vm.dirty_bytes = 1073741824\n>\n> vm.overcommit_ratio = 100\n>\n> vm.zone_reclaim_mode = 0\n>\n> kernel.numa_balancing = 0\n>\n> kernel.sched_autogroup_enabled = 0\n>\n> kernel.sched_migration_cost_ns = 5000000\n>\n>\n> The problem I have is very poor read. When I benchmark my array with fio I\n> get random reads of about 200MB/s and 1100IOPS and sequential reads of\n> about 286MB/s and 21000IPS. But when I watch my queries using pg_activity,\n> I get at best 4MB/s. Also using dstat I can see that iowait time is at\n> about 25%. This problem is not query-dependent.\n>\n> I backed up the database, I reformated the array making sure it is well\n> aligned then restored the database and got the same result.\n>\n> Where should I target my troubleshooting at this stage? I reformatted my\n> drive, I tuned my postgresql.conf and OS as much as I could. The hardware\n> doesn’t seem to have any issues, I am really puzzled.\n>\n> Thanks!\n>\n>\n> Charles\n>\n> --\n> Charles Nadeau Ph.D.\n>\n\n\n\n-- \nCharles Nadeau Ph.D.\nhttp://charlesnadeau.blogspot.com/\n\nAll,Here is a list of what I did based of the suggestions made after my initial post:*Reduce max_parallel_workers to 4: Values higher makes the workers wait for data as the RAID0 array can't deliver high enough IOPS.*Reduce random_page_cost to 1: Forcing the use of index makes queries faster despite low random throughput.*Increase shared_buffer to 66GB and effective_cache_size to 53GB: With the new server having 144GB of RAM, increasing shared_buffer allows Postgresql to keep a lot of data in memory reducing the need to go to disk.*Reduce min_parallel_relation_size to 512kB to have more workers when doing sequential parallel scan*Increased the /sys/block/sd[ac]/queue/read_ahead_kb to 16384 for my arrays using HDD*Reused old SSDs (that are compatible with my RAID controller, to my surprise) to put my most used index and tables.Thanks to everybody who made suggestions. I now know more about Postgresql tuning.CharlesOn Mon, Jul 10, 2017 at 4:03 PM, Charles Nadeau <[email protected]> wrote:\nI’m running\nPostgreSQL 9.6.3 on Ubuntu 16.10 (kernel 4.4.0-85-generic). 
Hardware\nis:\n*2x Intel Xeon E5550\n*72GB RAM\n*Hardware RAID10 (4\nx 146GB SAS 10k) P410i controller with 1GB FBWC (80% read/20% write)\nfor Postgresql data only:\n Logical Drive:\n3\n Size: 273.4\nGB\n Fault\nTolerance: 1+0\n Heads: 255\n Sectors Per\nTrack: 32\n Cylinders:\n65535\n Strip Size:\n128 KB\n Full Stripe\nSize: 256 KB\n Status: OK\n Caching: \nEnabled\n Unique\nIdentifier: 600508B1001037383941424344450A00\n Disk Name:\n/dev/sdc\n Mount\nPoints: /mnt/data 273.4 GB\n OS Status:\nLOCKED\n Logical\nDrive Label: A00A194750123456789ABCDE516F\n Mirror\nGroup 0:\n \nphysicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)\n Mirror\nGroup 1:\n \nphysicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)\n \nphysicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)\n Drive Type:\nData\nFormatted with ext4\nwith: sudo mkfs.ext4 -E stride=32,stripe_width=64 -v /dev/sdc1.\nMounted in\n/etc/fstab with this line: \"UUID=99fef4ae-51dc-4365-9210-0b153b1cbbd0\n/mnt/data ext4\nrw,nodiratime,user_xattr,noatime,nobarrier,errors=remount-ro 0 1\"\nPostgresql is the\nonly application running on this server.\n\n\nPostgresql is used\nas a mini data warehouse to generate reports and do statistical\nanalysis. It is used by at most 2 users and fresh data is added every\n10 days. The database has 16 tables: one is 224GB big and the rest\nare between 16kB and 470MB big.\n\n\nMy configuration is:\n\n\n name \n | current_setting | \nsource \n\n---------------------------------+------------------------------------------------+----------------------\n application_name \n | psql | client\n\nautovacuum_vacuum_scale_factor | 0 \n | configuration file\n\nautovacuum_vacuum_threshold | 2000 \n | configuration file\n\ncheckpoint_completion_target | 0.9 \n | configuration file\n checkpoint_timeout \n | 30min |\nconfiguration file\n client_encoding \n | UTF8 | client\n client_min_messages\n | log |\nconfiguration file\n cluster_name \n | 9.6/main |\nconfiguration file\n\ncpu_index_tuple_cost | 0.001 \n | configuration file\n cpu_operator_cost \n | 0.0005 |\nconfiguration file\n cpu_tuple_cost \n | 0.003 |\nconfiguration file\n DateStyle \n | ISO, YMD |\nconfiguration file\n\ndefault_statistics_target | 100 \n | configuration file\n\ndefault_text_search_config | pg_catalog.english \n | configuration file\n\ndynamic_shared_memory_type | posix \n | configuration file\n\neffective_cache_size | 22GB \n | configuration file\n\neffective_io_concurrency | 4 \n | configuration file\n external_pid_file \n | /var/run/postgresql/9.6-main.pid |\nconfiguration file\n lc_messages \n | C |\nconfiguration file\n lc_monetary \n | en_CA.UTF-8 |\nconfiguration file\n lc_numeric \n | en_CA.UTF-8 |\nconfiguration file\n lc_time \n | en_CA.UTF-8 |\nconfiguration file\n listen_addresses \n | * |\nconfiguration file\n lock_timeout \n | 100s |\nconfiguration file\n\nlog_autovacuum_min_duration | 0 \n | configuration file\n log_checkpoints \n | on |\nconfiguration file\n log_connections \n | on |\nconfiguration file\n log_destination \n | csvlog |\nconfiguration file\n log_directory \n | /mnt/bigzilla/data/toburn/hp/postgresql/pg_log |\nconfiguration file\n log_disconnections \n | on |\nconfiguration file\n log_error_verbosity\n | default |\nconfiguration file\n log_file_mode \n | 0600 |\nconfiguration file\n log_filename \n | postgresql-%Y-%m-%d_%H%M%S.log |\nconfiguration file\n log_line_prefix \n | user=%u,db=%d,app=%aclient=%h |\nconfiguration file\n log_lock_waits \n | 
on |\nconfiguration file\n\nlog_min_duration_statement | 0 \n | configuration file\n\nlog_min_error_statement | debug1 \n | configuration file\n log_min_messages \n | debug1 |\nconfiguration file\n log_rotation_size \n | 1GB |\nconfiguration file\n log_temp_files \n | 0 |\nconfiguration file\n log_timezone \n | localtime |\nconfiguration file\n logging_collector \n | on |\nconfiguration file\n\nmaintenance_work_mem | 3GB \n | configuration file\n max_connections \n | 10 |\nconfiguration file\n\nmax_locks_per_transaction | 256 \n | configuration file\n\nmax_parallel_workers_per_gather | 14 \n | configuration file\n max_stack_depth \n | 2MB |\nenvironment variable\n max_wal_size \n | 4GB |\nconfiguration file\n\nmax_worker_processes | 14 \n | configuration file\n min_wal_size \n | 2GB |\nconfiguration file\n parallel_setup_cost\n | 1000 |\nconfiguration file\n parallel_tuple_cost\n | 0.012 |\nconfiguration file\n port \n | 5432 |\nconfiguration file\n random_page_cost \n | 22 |\nconfiguration file\n seq_page_cost \n | 1 |\nconfiguration file\n shared_buffers \n | 34GB |\nconfiguration file\n\nshared_preload_libraries | pg_stat_statements \n | configuration file\n ssl \n | on |\nconfiguration file\n ssl_cert_file \n | /etc/ssl/certs/ssl-cert-snakeoil.pem |\nconfiguration file\n ssl_key_file \n | /etc/ssl/private/ssl-cert-snakeoil.key |\nconfiguration file\n statement_timeout \n | 1000000s |\nconfiguration file\n\nstats_temp_directory |\n/var/run/postgresql/9.6-main.pg_stat_tmp | configuration file\n\nsuperuser_reserved_connections | 1 \n | configuration file\n syslog_facility \n | local1 |\nconfiguration file\n syslog_ident \n | postgres |\nconfiguration file\n\nsyslog_sequence_numbers | on \n | configuration file\n temp_file_limit \n | 80GB |\nconfiguration file\n TimeZone \n | localtime |\nconfiguration file\n track_activities \n | on |\nconfiguration file\n track_counts \n | on |\nconfiguration file\n track_functions \n | all |\nconfiguration file\n\nunix_socket_directories | /var/run/postgresql \n | configuration file\n vacuum_cost_delay \n | 1ms |\nconfiguration file\n vacuum_cost_limit \n | 5000 |\nconfiguration file\n\nvacuum_cost_page_dirty | 200 \n | configuration file\n\nvacuum_cost_page_hit | 10 \n | configuration file\n\nvacuum_cost_page_miss | 100 \n | configuration file\n wal_buffers \n | 16MB |\nconfiguration file\n wal_compression \n | on |\nconfiguration file\n wal_sync_method \n | fdatasync |\nconfiguration file\n work_mem \n | 1468006kB |\nconfiguration file\n\n\nThe part of\n/etc/sysctl.conf I modified is:\nvm.swappiness = 1\nvm.dirty_background_bytes\n= 134217728\nvm.dirty_bytes =\n1073741824\nvm.overcommit_ratio\n= 100\nvm.zone_reclaim_mode\n= 0\nkernel.numa_balancing\n= 0\nkernel.sched_autogroup_enabled\n= 0\nkernel.sched_migration_cost_ns\n= 5000000\n\n\nThe problem I have\nis very poor read. When I benchmark my array with fio I get random\nreads of about 200MB/s and 1100IOPS and sequential reads of about\n286MB/s and 21000IPS. But when I watch my queries using pg_activity,\nI get at best 4MB/s. Also using dstat I can see that iowait time is\nat about 25%. This problem is not query-dependent.\nI backed up the\ndatabase, I reformated the array making sure it is well aligned then\nrestored the database and got the same result.\nWhere should I\ntarget my troubleshooting at this stage? I reformatted my drive, I\ntuned my postgresql.conf and OS as much as I could. 
The hardware\ndoesn’t seem to have any issues, I am really puzzled.\nThanks!\n\n\nCharles-- Charles Nadeau Ph.D.\n\n-- Charles Nadeau Ph.D.http://charlesnadeau.blogspot.com/",
"msg_date": "Tue, 25 Jul 2017 11:36:25 +0200",
"msg_from": "Charles Nadeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very poor read performance, query independent"
},
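A hedged sketch of how the GUC changes listed above could be applied on 9.6. shared_buffers only takes effect after a restart, and "max_parallel_workers" in the list presumably refers to max_parallel_workers_per_gather, since a GUC with that exact name only appears in PostgreSQL 10; the read_ahead_kb and SSD changes live outside the database and are not shown here:

    ALTER SYSTEM SET random_page_cost = 1;
    ALTER SYSTEM SET effective_cache_size = '53GB';
    ALTER SYSTEM SET shared_buffers = '66GB';               -- needs a server restart
    ALTER SYSTEM SET min_parallel_relation_size = '512kB';  -- 9.6 name; renamed in PostgreSQL 10
    ALTER SYSTEM SET max_parallel_workers_per_gather = 4;   -- assuming this is the setting meant
    SELECT pg_reload_conf();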
{
"msg_contents": "Nice!\n\nPleased that the general idea worked well for you!\n\nI'm also relieved that you did not follow my recommendation exactly - \nI'm been trialling a Samsung 960 Evo (256GB) and Intel 600p (256GB) and \nI've stumbled across the serious disadvantages of (consumer) M.2 drives \nusing TLC NAND - terrible sustained write performance! While these guys \ncan happily do ~ 2GB/s reads, their write performance is only 'burst \ncapable'. They have small SLC NAND 'write caches' that do ~1GB/s for a \n*limited time* (10-20s) and after that you get ~ 200 MB/s! Ouch - my old \nCrucial 550 can do 350 MB/s sustained writes (so two of them in RAID0 \nare doing 700 MB/s for hours).\n\nBigger capacity drives can do better - but overall I'm not that \nimpressed with the current trend of using TLC NAND.\n\nregards\n\nMark\n\n\nOn 21/07/17 00:50, Charles Nadeau wrote:\n> Mark,\n>\n> I received yesterday a second server having 16 drives bays. Just for a \n> quick trial, I used 2 old 60GB SSD (a Kingston V300 and a ADATA SP900) \n> to build a RAID0. To my surprise, my very pecky RAID controller (HP \n> P410i) recognised them without a fuss (although as SATAII drives at \n> 3Gb/s. A quick fio benchmark gives me 22000 random 4k read IOPS, more \n> than my 5 146GB 10k SAS disks in RAID0). I moved my most frequently \n> used index to this array and will try to do some benchmarks.\n> Knowing that SSDs based on SandForce-2281 controller are recognised by \n> my server, I may buy a pair of bigger/newer ones to put my tables on.\n>\n> Thanks!\n>\n> Charles\n>\n> On Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Thinking about this a bit more - if somewhat more blazing\n> performance is needed, then this could be achieved via losing the\n> RAID card and spinning disks altogether and buying 1 of the NVME\n> or SATA solid state products: e.g\n>\n> - Samsung 960 Pro or Evo 2 TB (approx 1 or 2 GB/s seq scan speeds\n> and 200K IOPS)\n>\n> - Intel S3610 or similar 1.2 TB (500 MB/s seq scan and 30K IOPS)\n>\n>\n> The Samsung needs an M.2 port on the mobo (but most should have\n> 'em - and if not PCIe X4 adapter cards are quite cheap). The Intel\n> is a bit more expensive compared to the Samsung, and is slower but\n> has a longer lifetime. However for your workload the Sammy is\n> probably fine.\n>\n> regards\n>\n> Mark\n>\n> On 15/07/17 11:09, Mark Kirkwood wrote:\n>\n> Ah yes - that seems more sensible (but still slower than I\n> would expect for 5 disks RAID 0).\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> <http://www.postgresql.org/mailpref/pgsql-performance>\n>\n>\n>\n>\n> -- \n> Charles Nadeau Ph.D.\n> http://charlesnadeau.blogspot.com/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 19 Aug 2017 18:51:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very poor read performance, query independent"
}
] |
[
{
"msg_contents": "We are on Postgres 9.5, and have been running a daily vacuum analyze on the\nentire database since 8.2 \nThe data has grown exponentially since, and we are seeing that queries are\nnow being significantly affected while the vacuum analyze runs. The query\ndatabase is a Slony slave. \nSo the question is, is this typical behavior and should we still be running\na daily vacuum analyze on the database?\n\nThanks!\nRV\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/vacuum-analyze-affecting-query-performance-tp5970681.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Jul 2017 10:25:07 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum analyze affecting query performance"
},
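Before deciding whether the nightly run is still needed, it helps to check whether autovacuum is already keeping up on its own; a minimal sketch that works on 9.5:

    SELECT relname, n_dead_tup, last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;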
{
"msg_contents": "rverghese wrote:\r\n> We are on Postgres 9.5, and have been running a daily vacuum analyze on the\r\n> entire database since 8.2\r\n> The data has grown exponentially since, and we are seeing that queries are\r\n> now being significantly affected while the vacuum analyze runs. The query\r\n> database is a Slony slave.\r\n> So the question is, is this typical behavior and should we still be running\r\n> a daily vacuum analyze on the database?\r\n\r\nWhile VACUUM runs on tables, you can expect performance to get worse,\r\nmostly because of contention for I/O resources (is that the case for you?).\r\n\r\nAutovacuum has become *much* better since PostgreSQL 8.2.\r\n\r\nIf you cannot find a \"quiet time\" during which you can keep running your\r\ndaily VACUUM without causing problems, don't do it and go with autovacuum\r\nby all means.\r\n\r\nAutovacuum is less disruptive than normal VACUUM, it is designed to not\r\nhog resources.\r\n\r\nIf the database is very busy and autovacuum has problems keeping up,\r\ntune it to be more aggressive (and it will still be less disruptive\r\nthan a manual VACUUM).\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Jul 2017 07:15:12 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum analyze affecting query performance"
},
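A minimal sketch of making autovacuum more aggressive on one particularly busy table, along the lines Laurenz describes; the table name and numbers are illustrative starting points, not tuned values:

    ALTER TABLE busy_table SET (
        autovacuum_vacuum_scale_factor  = 0.02,  -- vacuum after roughly 2% dead rows
        autovacuum_analyze_scale_factor = 0.02,
        autovacuum_vacuum_cost_delay    = 10     -- milliseconds; lower means more aggressive
    );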
{
"msg_contents": "Thanks for the info!\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/vacuum-analyze-affecting-query-performance-tp5970681p5970830.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Jul 2017 08:57:01 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum analyze affecting query performance"
}
] |
[
{
"msg_contents": "Hello Experts,\n\n\nwe have created a postgres dump using this command: pg_dump pixabay | gzip > pixabay.gz\n\n\nAfter restarting postgres (even with a new cluster) and creating a new database, postgres is hanging while extracting the dump: gunzip -c pixabay.gz | psql pixabay\n\n\nThe log file shows that the autovacuum task is running (almost) endless...\n\n\n2017-07-12 18:05:50.822 CEST [19586] hans@pixabay LOG: duration: 1594.319 ms statement: CREATE INDEX photos_download_photo_id ON photos_download USING btree (photo_id);\n2017-07-12 18:05:52.340 CEST [19586] hans@pixabay LOG: duration: 1517.955 ms statement: CREATE INDEX photos_download_user_id ON photos_download USING btree (user_id);\n2017-07-12 18:05:52.355 CEST [19586] hans@pixabay LOG: duration: 14.954 ms statement: CREATE INDEX photos_editorschoice_created ON photos_editorschoice USING btree (created);\n2017-07-12 18:05:52.367 CEST [19586] hans@pixabay LOG: duration: 11.609 ms statement: CREATE INDEX photos_indexphoto_created ON photos_indexphoto USING btree (created);\n2017-07-12 20:34:58.943 CEST [19626] ERROR: canceling autovacuum task\n2017-07-12 20:34:58.943 CEST [19626] CONTEXT: automatic analyze of table \"pixabay.public.photos_photo\"\n2017-07-12 20:34:59.942 CEST [19586] hans@pixabay LOG: duration: 8947575.013 ms statement: CREATE INDEX photos_photo_approved_by_id ON photos_photo USING btree (approved_by_id);\n2017-07-12 20:35:00.471 CEST [19586] hans@pixabay LOG: duration: 528.777 ms statement: CREATE INDEX photos_photo_approved_date ON photos_photo USING btree (approved_date);\n\nWhat could cause this problem or how can we debug it?\n\nWe are running Postgres 9.4 / Ubuntu 16.04\n\nThanks, Hans\n\n\n\n\n\n\n\n\n\n\n\nHello Experts, \n\n\nwe have created a postgres dump using this command: pg_dump pixabay | gzip > pixabay.gz\n\n\nAfter restarting postgres (even with a new cluster) and creating a new database, postgres is hanging while extracting the dump: gunzip -c pixabay.gz | psql pixabay\n\n\nThe log file shows that the autovacuum\n task is running (almost) endless...\n\n\n\n2017-07-12 18:05:50.822 CEST [19586] hans@pixabay LOG: duration: 1594.319 ms statement: CREATE INDEX photos_download_photo_id ON photos_download USING btree (photo_id);\n2017-07-12 18:05:52.340 CEST [19586] hans@pixabay LOG: duration: 1517.955 ms statement: CREATE INDEX photos_download_user_id ON photos_download USING btree (user_id);\n2017-07-12 18:05:52.355 CEST [19586] hans@pixabay LOG: duration: 14.954 ms statement: CREATE INDEX photos_editorschoice_created ON photos_editorschoice USING btree (created);\n2017-07-12 18:05:52.367 CEST [19586] hans@pixabay LOG: duration: 11.609 ms statement: CREATE INDEX photos_indexphoto_created ON photos_indexphoto USING btree (created);\n\n2017-07-12 20:34:58.943 CEST [19626] ERROR: canceling autovacuum task\n2017-07-12 20:34:58.943 CEST [19626] CONTEXT: automatic analyze of table \"pixabay.public.photos_photo\"\n2017-07-12 20:34:59.942 CEST [19586] hans@pixabay LOG: duration: 8947575.013 ms statement: CREATE INDEX photos_photo_approved_by_id ON photos_photo USING btree (approved_by_id);\n2017-07-12 20:35:00.471 CEST [19586] hans@pixabay LOG: duration: 528.777 ms statement: CREATE INDEX photos_photo_approved_date ON photos_photo USING btree (approved_date);\n\n\nWhat could cause this problem or how can we debug it?\n\n\nWe are running Postgres 9.4 / Ubuntu 16.04\n\n\nThanks, Hans",
"msg_date": "Wed, 12 Jul 2017 19:00:45 +0000",
"msg_from": "Hans Braxmeier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Dump - Creating index never stops"
},
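When a restore appears to hang like this, a first diagnostic step is to see what the backends are doing and which locks are not being granted; a minimal sketch using the 9.4 catalog columns (the database name comes from the log above):

    SELECT pid, state, waiting, now() - query_start AS runtime, query
      FROM pg_stat_activity
     WHERE datname = 'pixabay';

    SELECT locktype, relation::regclass AS relation, mode, granted, pid
      FROM pg_locks
     WHERE NOT granted;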
{
"msg_contents": "Hans Braxmeier <[email protected]> writes:\n> After restarting postgres (even with a new cluster) and creating a new database, postgres is hanging while extracting the dump: gunzip -c pixabay.gz | psql pixabay\n\n> The log file shows that the autovacuum task is running (almost) endless...\n\n> 2017-07-12 18:05:52.367 CEST [19586] hans@pixabay LOG: duration: 11.609 ms statement: CREATE INDEX photos_indexphoto_created ON photos_indexphoto USING btree (created);\n> 2017-07-12 20:34:58.943 CEST [19626] ERROR: canceling autovacuum task\n> 2017-07-12 20:34:58.943 CEST [19626] CONTEXT: automatic analyze of table \"pixabay.public.photos_photo\"\n> 2017-07-12 20:34:59.942 CEST [19586] hans@pixabay LOG: duration: 8947575.013 ms statement: CREATE INDEX photos_photo_approved_by_id ON photos_photo USING btree (approved_by_id);\n\nWhat that looks like is it took the system an unusually long time to\nnotice that it needed to cancel the autovacuum to avoid a deadlock\nwith the CREATE INDEX. Was either process consuming a noticeable\namount of CPU during that interval? Do you have deadlock_timeout\nset higher than the default 1s?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Jul 2017 15:41:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Dump - Creating index never stops"
}
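Tom's questions can be checked directly: the lock-wait check that eventually cancels the conflicting autovacuum fires after deadlock_timeout, so an unusually high value would explain a long delay. A minimal sketch:

    SHOW deadlock_timeout;                     -- default is 1s
    -- if it was raised, put it back (or edit postgresql.conf):
    ALTER SYSTEM SET deadlock_timeout = '1s';
    SELECT pg_reload_conf();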
] |
[
{
"msg_contents": "Issuing exactly the same query as PostgreSQL 8.2.21 and PostgreSQL 9.3.2 will slow the response by 6.4 ms on average.\nWhat could be the cause?\nMeasurement method is as follows.\n・ PostgreSQL 8.2.21 installation\n ★Measurement\n・ Export DUMP of PostgreSQL 8.2.21\n・ PostgreSQL 8.2.21 uninstallation\n・ PostgreSQL 9.3.2 installation\n・ Dump import\n ★Measurement\n\n[query]\nselect\n table4.a as col_0_0_,\n table4.a as col_1_0_,\n table4.a as col_2_0_,\n table4.b as col_0_1_,\n table4.c,\n table4.d\nfrom\n table1,\n table2,\n table3,\n table4 \nwhere\n table1.a=table2.a and\n table1.a=\"parameter$1\" and\n table2.roleid=table3.roleid and\n table3.a=\"parameter$2\" and\n table4.b='3' and\n table2.a=table4.a;\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jul 2017 01:54:46 +0000",
"msg_from": "fx TATEISHI KOJI <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation from PostgreSQL 8.2.21 to PostgreSQL 9.3.2"
},
{
"msg_contents": "fx TATEISHI KOJI wrote:\r\n> Issuing exactly the same query as PostgreSQL 8.2.21 and PostgreSQL 9.3.2 will slow the\r\n> response by 6.4 ms on average.\r\n> What could be the cause?\r\n> Measurement method is as follows.\r\n> ・ PostgreSQL 8.2.21 installation\r\n> ★Measurement\r\n> ・ Export DUMP of PostgreSQL 8.2.21\r\n> ・ PostgreSQL 8.2.21 uninstallation\r\n> ・ PostgreSQL 9.3.2 installation\r\n> ・ Dump import\r\n> ★Measurement\r\n\r\nIt is impossible to answer this with certainty without\r\nEXPLAIN (ANALYZE, BUFFERS) output, but my first guess is that\r\nthe statistics on the 9.3 installation are not up to date.\r\n\r\nANALYZE all involved tables, then try again and see if the\r\nperformance degradation has vanished.\r\n\r\nIf not, start studying the execution plans.\r\n\r\nAre the parameters in postgresql.conf set the same?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jul 2017 06:46:43 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation from PostgreSQL 8.2.21 to PostgreSQL\n 9.3.2"
}
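A minimal sketch of the steps Laurenz suggests, with the column aliases dropped and 'value1'/'value2' standing in for the two application parameters of the original query:

    ANALYZE table1;
    ANALYZE table2;
    ANALYZE table3;
    ANALYZE table4;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT t4.a, t4.b, t4.c, t4.d
      FROM table1 t1
      JOIN table2 t2 ON t1.a = t2.a
      JOIN table3 t3 ON t2.roleid = t3.roleid
      JOIN table4 t4 ON t2.a = t4.a
     WHERE t1.a = 'value1'
       AND t3.a = 'value2'
       AND t4.b = '3';

Comparing this output between the two installations shows whether a plan change, rather than the server version itself, accounts for the extra 6.4 ms.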
] |
[
{
"msg_contents": "Dear expert,\n\nI have to create a user which have permission to create schemas and create database objects in database.\nI am using postgres 9.1.\n\nCould you please assist me?\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nDear expert,\n \nI have to create a user which have permission to\ncreate schemas and create database objects in database.\nI am using postgres 9.1.\n \nCould you please assist me? \n\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Wed, 19 Jul 2017 12:23:33 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to grant only create schemas and create database objects\n permission to user."
},
{
"msg_contents": "On 2017-07-19 14:23, Dinesh Chandra 12108 wrote:\n> Dear expert,\n> \n> I have to create a user which have permission to CREATE SCHEMAS AND\n> CREATE DATABASE OBJECTS in database.\n> \n> I am using postgres 9.1.\n> \n> Could you please assist me?\n\nAccess control is managed using the GRANT command: \nhttps://www.postgresql.org/docs/9.1/static/sql-grant.html\n\nAbout halfway down that page it says:\n\n-----\nCREATE\nFor databases, allows new schemas to be created within the database.\n\nFor schemas, allows new objects to be created within the schema. To \nrename an existing object, you must own the object and have this \nprivilege for the containing schema.\n\nFor tablespaces, allows tables, indexes, and temporary files to be \ncreated within the tablespace, and allows databases to be created that \nhave the tablespace as their default tablespace. (Note that revoking \nthis privilege will not alter the placement of existing objects.)\n----\n\nI suggest you try it out on a test database first.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jul 2017 15:43:55 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to grant only create schemas and create database\n objects permission to user."
}
] |
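A minimal sketch of the GRANT approach quoted in the reply above, assuming a freshly created role named app_user, a database named mydb and a schema named app_schema (all hypothetical names); run it as a superuser or as the owner of the database and schema:

-- Login role for the application user.
CREATE ROLE app_user LOGIN PASSWORD 'change_me';

-- Let the role create new schemas inside the database.
GRANT CREATE ON DATABASE mydb TO app_user;

-- Let the role use an existing schema and create objects (tables, views, ...) in it.
GRANT USAGE, CREATE ON SCHEMA app_schema TO app_user;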
[
{
"msg_contents": "Hi team,\n\nIs this possible to create the postgis extension by CREATE EXTENSION postgis; in PostgreSQL 9.1\n\nAs I know the template_postgis database is created by default during postgis installation to support the spatial objects.\nI am getting below the error messages while creating postgis extension, under POSTGIS=\"1.5.3\n\nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\n********** Error **********\nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\nSQL state: 58P01\n\n\nPlease help on this.\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi team,\n \nIs this possible to create the postgis extension by CREATE EXTENSION postgis; in PostgreSQL 9.1\n \nAs I know the template_postgis database is created by default during postgis installation to support the spatial objects.\nI am getting below the error messages while creating postgis extension, under POSTGIS=\"1.5.3\n \nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\n********** Error **********\nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\nSQL state: 58P01 \n \n \nPlease help on this.\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 20 Jul 2017 09:05:13 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE EXTENSION postgis;"
},
{
"msg_contents": "Hi,\n\nPlease use the proper list, your question is not appropriate for this list\nwhich is about PostgreSQL performance.\n\nFlorent\n\nOn Thu, Jul 20, 2017 at 11:05 AM, Daulat Ram <[email protected]> wrote:\n\n> Hi team,\n>\n>\n>\n> Is this possible to create the postgis extension by *CREATE EXTENSION\n> postgis;* in PostgreSQL 9.1\n>\n>\n>\n> As I know the template_postgis database is created by default during\n> postgis installation to support the spatial objects.\n>\n> I am getting below the error messages while creating postgis extension,\n> under POSTGIS=\"1.5.3\n>\n>\n>\n> ERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/\n> postgresql/extension/postgis.control\": No such file or directory\n>\n> ********** Error **********\n>\n> ERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/\n> postgresql/extension/postgis.control\": No such file or directory\n>\n> SQL state: 58P01\n>\n>\n>\n>\n>\n> Please help on this.\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\n\n\n-- \n[image: Nuxeo]\n\nFlorent Guillaume\nHead of R&D\n\nTwitter: @efge\n\nHi,Please use the proper list, your question is not appropriate for this list which is about PostgreSQL performance.FlorentOn Thu, Jul 20, 2017 at 11:05 AM, Daulat Ram <[email protected]> wrote:\n\n\nHi team,\n \nIs this possible to create the postgis extension by CREATE EXTENSION postgis; in PostgreSQL 9.1\n \nAs I know the template_postgis database is created by default during postgis installation to support the spatial objects.\nI am getting below the error messages while creating postgis extension, under POSTGIS=\"1.5.3\n \nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\n********** Error **********\nERROR: could not open extension control file \"/data/PostgreSQL/9.1/share/postgresql/extension/postgis.control\": No such file or directory\nSQL state: 58P01 \n \n \nPlease help on this.\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- Florent Guillaume Head of R&DTwitter: @efge",
"msg_date": "Thu, 20 Jul 2017 12:51:54 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE EXTENSION postgis;"
}
] |
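For context on the error above: CREATE EXTENSION postgis only works for PostGIS builds that ship an extension control file, which (as far as I recall) means PostGIS 2.0 or later; PostGIS 1.5 is normally enabled by running its SQL scripts or by cloning the template_postgis database created by the installer. A hedged sketch, with spatial_db as a placeholder database name:

-- See whether any postgis extension is packaged for this server at all.
SELECT name, default_version
FROM pg_available_extensions
WHERE name LIKE 'postgis%';

-- With PostGIS 1.5, create a spatially enabled database from the template instead.
CREATE DATABASE spatial_db TEMPLATE template_postgis;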
[
{
"msg_contents": "Dear expert,\n\nI have to download PostgreSQL 9.5.6 version for windows 64 bit.\nam unable to find this version in PostgreSQL site.\nCould anyone tell me how I can find this software.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nDear expert,\n \nI have to download PostgreSQL 9.5.6 version for windows 64 bit.\nam unable to find this version in PostgreSQL site.\nCould anyone tell me how I can find this software.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 21 Jul 2017 10:12:39 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to find PostgreSQL 9.5.6 software in PostgreSQL site."
},
{
"msg_contents": "Hi,\n\nPlease use the proper list, your question is not appropriate for this list\nwhich is about PostgreSQL performance.\n\nFlorent\n\n\nOn Fri, Jul 21, 2017 at 12:12 PM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Dear expert,\n>\n>\n>\n> I have to download PostgreSQL 9.5.6 version for windows 64 bit.\n>\n> am unable to find this version in PostgreSQL site.\n>\n> Could anyone tell me how I can find this software.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\n\n\n-- \n[image: Nuxeo]\n\nFlorent Guillaume\nHead of R&D\n\nTwitter: @efge\n\nHi,Please use the proper list, your question is not appropriate for this list which is about PostgreSQL performance.FlorentOn Fri, Jul 21, 2017 at 12:12 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nDear expert,\n \nI have to download PostgreSQL 9.5.6 version for windows 64 bit.\nam unable to find this version in PostgreSQL site.\nCould anyone tell me how I can find this software.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- Florent Guillaume Head of R&DTwitter: @efge",
"msg_date": "Fri, 21 Jul 2017 14:47:39 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to find PostgreSQL 9.5.6 software in PostgreSQL site."
}
] |
[
{
"msg_contents": "Hi team,\n\nI need to connect to MS-SQL server 2008/2012 from PostgreSQL 9.5 in Windows7 environment to fetch the tables of SQL server.\n\nPlease help on this.\n\nRegards,\nDaulat\n\n\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi team,\n \nI need to connect to MS-SQL server 2008/2012 from PostgreSQL 9.5 in Windows7 environment to fetch the tables of SQL server.\n \nPlease help on this.\n \nRegards,\nDaulat\n \n \n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Tue, 1 Aug 2017 04:25:59 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to access data of SQL server database from PostgreSQL "
},
{
"msg_contents": "Hi Daulat\n\n\nThis is not the list for that (check https://www.postgresql.org/list/), \nbut if you want to access sql server look at tds_fdw( \nhttps://github.com/tds-fdw/tds_fdw), I do not know if it installs on \nwindows, I've only used it on linux, and function very well.\n\nRegards\nAnthony\n\nOn 01/08/17 00:25, Daulat Ram wrote:\n>\n> Hi team,\n>\n> I need to connect to MS-SQL server 2008/2012 from PostgreSQL 9.5 in \n> Windows7 environment to fetch the tables of SQL server.\n>\n> Please help on this.\n>\n> Regards,\n>\n> Daulat\n>\n>\n> ------------------------------------------------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) \n> and may contain confidential and privileged information. Any \n> unauthorized review, use, disclosure or distribution is prohibited. If \n> you are not the intended recipient, please contact the sender by reply \n> email and destroy all copies of the original message. Check all \n> attachments for viruses before opening them. All views or opinions \n> presented in this e-mail are those of the author and may not reflect \n> the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\nHi Daulat\n\n\n\nThis is not the list for that (check\n https://www.postgresql.org/list/), but if you want to access sql\n server look at tds_fdw( https://github.com/tds-fdw/tds_fdw), I do\n not know if it installs on windows, I've only used it on linux,\n and function very well.\n\n Regards\n Anthony\n\nOn 01/08/17 00:25, Daulat Ram wrote:\n\n\n\n\n\n\nHi team,\n�\nI need to connect to MS-SQL server\n 2008/2012 from PostgreSQL 9.5 in Windows7 environment to fetch\n the tables of SQL server.\n�\nPlease help on this.\n�\nRegards,\nDaulat\n�\n�\n�\n\n\n\n\n DISCLAIMER:\n\n This email message is for the sole use of the intended\n recipient(s) and may contain confidential and privileged\n information. Any unauthorized review, use, disclosure or\n distribution is prohibited. If you are not the intended\n recipient, please contact the sender by reply email and destroy\n all copies of the original message. Check all attachments for\n viruses before opening them. All views or opinions presented in\n this e-mail are those of the author and may not reflect the\n opinion of Cyient or those of our affiliates.",
"msg_date": "Tue, 1 Aug 2017 09:06:17 -0400",
"msg_from": "Anthony Sotolongo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to access data of SQL server database from\n PostgreSQL"
},
{
"msg_contents": "On 07/31/2017 09:25 PM, Daulat Ram wrote:\n> Hi team,\n> \n> I need to connect to MS-SQL server 2008/2012 from PostgreSQL 9.5 in \n> Windows7 environment to fetch the tables of SQL server.\n> \n> Please help on this.\n\nhttps://github.com/tds-fdw/tds_fdw\n\nJD\n\n> \n> Regards,\n> \n> Daulat\n> \n> \n> ------------------------------------------------------------------------\n> \n> DISCLAIMER:\n> \n> This email message is for the sole use of the intended recipient(s) and \n> may contain confidential and privileged information. Any unauthorized \n> review, use, disclosure or distribution is prohibited. If you are not \n> the intended recipient, please contact the sender by reply email and \n> destroy all copies of the original message. Check all attachments for \n> viruses before opening them. All views or opinions presented in this \n> e-mail are those of the author and may not reflect the opinion of Cyient \n> or those of our affiliates.\n\n\n-- \nCommand Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc\n\nPostgreSQL Centered full stack support, consulting and development.\nAdvocate: @amplifypostgres || Learn: https://pgconf.us\n***** Unless otherwise stated, opinions are my own. *****\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Aug 2017 07:43:27 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to access data of SQL server database from\n PostgreSQL"
}
] |
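A minimal sketch of the tds_fdw setup suggested in the two replies above, assuming the extension builds and installs on the platform in use; the host, credentials and table names are placeholders, and the option names (servername, port, database, username, password, table_name) are taken from the tds_fdw documentation as I remember it, so they should be double-checked against the installed version:

CREATE EXTENSION tds_fdw;

-- Foreign server pointing at the MS SQL Server instance.
CREATE SERVER mssql_svr
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername 'mssql-host', port '1433', database 'SourceDb');

-- Map the local role to a SQL Server login.
CREATE USER MAPPING FOR CURRENT_USER
  SERVER mssql_svr
  OPTIONS (username 'sa', password 'secret');

-- Expose one remote table locally; columns are declared by hand.
CREATE FOREIGN TABLE mssql_orders (
  order_id   integer,
  order_date timestamp
)
  SERVER mssql_svr
  OPTIONS (table_name 'dbo.Orders');

SELECT * FROM mssql_orders LIMIT 10;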
[
{
"msg_contents": "Hi,\n\nI have 2 PG servers with same h/w and configuration and they are not in\nreplication.\n\nOn server A it takes 20 minutes to execute the script.\nOn server B it takes more than 20 hours. (Seems to be stuck with create\nindex and and create foreign key steps)\n\nAny guidance to troubleshoot this would be highly appreciated.\n\nThanks & Regards,\nSumeet Shukla\n\nHi,I have 2 PG servers with same h/w and configuration and they are not in replication.On server A it takes 20 minutes to execute the script.On server B it takes more than 20 hours. (Seems to be stuck with create index and and create foreign key steps)Any guidance to troubleshoot this would be highly appreciated.Thanks & Regards,Sumeet Shukla",
"msg_date": "Tue, 1 Aug 2017 19:11:51 +0530",
"msg_from": "Sumeet Shukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "2 server with same configuration but huge difference in performance"
},
{
"msg_contents": "On Tue, Aug 1, 2017 at 9:41 AM, Sumeet Shukla <[email protected]>\nwrote:\n\n> Hi,\n>\n> I have 2 PG servers with same h/w and configuration and they are not in\n> replication.\n>\n> On server A it takes 20 minutes to execute the script.\n> On server B it takes more than 20 hours. (Seems to be stuck with create\n> index and and create foreign key steps)\n>\n> Any guidance to troubleshoot this would be highly appreciated.\n>\n> Thanks & Regards,\n> Sumeet Shukla\n>\n>\nCheck for long running queries on the server that is taking longer. If it's\nthings like CREATE INDEX or ALTER TABLE statements that are being blocked,\na transaction running on the table involved will cause those commands to be\nheld until those transactions complete.\n\nIf it's normal read/write queries to that are taking longer, ensure the\ndatabase statistics are up to date by running an analyze.\n\nKeith\n\nOn Tue, Aug 1, 2017 at 9:41 AM, Sumeet Shukla <[email protected]> wrote:Hi,I have 2 PG servers with same h/w and configuration and they are not in replication.On server A it takes 20 minutes to execute the script.On server B it takes more than 20 hours. (Seems to be stuck with create index and and create foreign key steps)Any guidance to troubleshoot this would be highly appreciated.Thanks & Regards,Sumeet Shukla\n\nCheck for long running queries on the server that is taking longer. If\n it's things like CREATE INDEX or ALTER TABLE statements that are being \nblocked, a transaction running on the table involved will cause those \ncommands to be held until those transactions complete.If \nit's normal read/write queries to that are taking longer, ensure the \ndatabase statistics are up to date by running an analyze.Keith",
"msg_date": "Tue, 1 Aug 2017 10:16:04 -0400",
"msg_from": "Keith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2 server with same configuration but huge difference in\n performance"
},
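One way to act on this advice is to look for long-running transactions and for lock waits on the tables being indexed. A sketch against the standard catalogs (pg_stat_activity columns vary slightly between releases, e.g. wait_event only exists from 9.6 on, so this sticks to widely available columns):

-- Sessions that have been busy, or idle in a transaction, for a while.
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY xact_age DESC NULLS LAST;

-- Ungranted lock requests and the relations they target.
SELECT l.pid, l.mode, l.granted, c.relname
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;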
{
"msg_contents": "Hi Sumeet Shukla\n\nWhile script is running check the pg_stat_activity, this view can be util\n\n\nRegards\n\nAnthony\n\n\nOn 01/08/17 10:16, Keith wrote:\n>\n> On Tue, Aug 1, 2017 at 9:41 AM, Sumeet Shukla \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> I have 2 PG servers with same h/w and configuration and they are\n> not in replication.\n>\n> On server A it takes 20 minutes to execute the script.\n> On server B it takes more than 20 hours. (Seems to be stuck with\n> create index and and create foreign key steps)\n>\n> Any guidance to troubleshoot this would be highly appreciated.\n>\n> Thanks & Regards,\n> Sumeet Shukla\n>\n>\n> Check for long running queries on the server that is taking longer. If \n> it's things like CREATE INDEX or ALTER TABLE statements that are being \n> blocked, a transaction running on the table involved will cause those \n> commands to be held until those transactions complete.\n>\n> If it's normal read/write queries to that are taking longer, ensure \n> the database statistics are up to date by running an analyze.\n>\n> Keith\n\n\n\n\n\n\n\nHi Sumeet Shukla\nWhile script is running check the pg_stat_activity, this view can\n be util\n\n\nRegards\nAnthony\n\n\n\nOn 01/08/17 10:16, Keith wrote:\n\n\n\n\nOn Tue, Aug 1, 2017 at 9:41 AM,\n Sumeet Shukla <[email protected]>\n wrote:\n\nHi,\n\n\nI have 2\n PG servers with same h/w and configuration and they\n are not in replication.\n\n\nOn server\n A it takes 20 minutes to execute the script.\nOn server\n B it takes more than 20 hours. (Seems to be stuck\n with create index and and create foreign key steps)\n\n\nAny\n guidance to troubleshoot this would be highly\n appreciated.\n\n\n\n\n\n\nThanks\n & Regards,\nSumeet\n Shukla\n\n\n\n\n\n\n\n\n\n\n\n\nCheck for long running queries on the server that is\n taking longer. If it's things like CREATE INDEX or ALTER\n TABLE statements that are being blocked, a transaction\n running on the table involved will cause those commands to\n be held until those transactions complete.\n\n\n If it's normal read/write queries to that are taking longer,\n ensure the database statistics are up to date by running an\n analyze.\n\n\n Keith",
"msg_date": "Tue, 1 Aug 2017 10:21:02 -0400",
"msg_from": "Anthony Sotolongo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 2 server with same configuration but huge\n difference in performance"
},
{
"msg_contents": "On Tue, Aug 1, 2017 at 6:41 AM, Sumeet Shukla <[email protected]> wrote:\n> Hi,\n>\n> I have 2 PG servers with same h/w and configuration and they are not in\n> replication.\n>\n> On server A it takes 20 minutes to execute the script.\n> On server B it takes more than 20 hours. (Seems to be stuck with create\n> index and and create foreign key steps)\n>\n> Any guidance to troubleshoot this would be highly appreciated.\n\nThere's lots of areas where you could be running into problems. I\nsuggest reading this wiki page on reporting performance problems.\nIt'll help you gather more evidence of where and what the problem is.\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Tue, 1 Aug 2017 08:35:20 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 2 server with same configuration but huge difference in\n performance"
},
{
"msg_contents": "It seems that it is happening because of the way the database is created.\nOn an old database it runs perfectly fine or if I use the old DB as\ntemplate to create the new one, it runs fine. But if I create a new DB\nwith same settings and permissions it hangs. I'm now trying to find the\ndifference between these 2 databases.\n\nThanks & Regards,\nSumeet Shukla\n\n\nOn Tue, Aug 1, 2017 at 9:05 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Tue, Aug 1, 2017 at 6:41 AM, Sumeet Shukla <[email protected]>\n> wrote:\n> > Hi,\n> >\n> > I have 2 PG servers with same h/w and configuration and they are not in\n> > replication.\n> >\n> > On server A it takes 20 minutes to execute the script.\n> > On server B it takes more than 20 hours. (Seems to be stuck with create\n> > index and and create foreign key steps)\n> >\n> > Any guidance to troubleshoot this would be highly appreciated.\n>\n> There's lots of areas where you could be running into problems. I\n> suggest reading this wiki page on reporting performance problems.\n> It'll help you gather more evidence of where and what the problem is.\n>\n> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n\nIt seems that it is happening because of the way the database is created. On an old database it runs perfectly fine or if I use the old DB as template to create the new one, it runs fine. But if I create a new DB with same settings and permissions it hangs. I'm now trying to find the difference between these 2 databases.Thanks & Regards,Sumeet Shukla\nOn Tue, Aug 1, 2017 at 9:05 PM, Scott Marlowe <[email protected]> wrote:On Tue, Aug 1, 2017 at 6:41 AM, Sumeet Shukla <[email protected]> wrote:\n> Hi,\n>\n> I have 2 PG servers with same h/w and configuration and they are not in\n> replication.\n>\n> On server A it takes 20 minutes to execute the script.\n> On server B it takes more than 20 hours. (Seems to be stuck with create\n> index and and create foreign key steps)\n>\n> Any guidance to troubleshoot this would be highly appreciated.\n\nThere's lots of areas where you could be running into problems. I\nsuggest reading this wiki page on reporting performance problems.\nIt'll help you gather more evidence of where and what the problem is.\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems",
"msg_date": "Tue, 1 Aug 2017 21:15:14 +0530",
"msg_from": "Sumeet Shukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] 2 server with same configuration but huge difference in\n performance"
},
{
"msg_contents": "On Tue, Aug 1, 2017 at 8:45 AM, Sumeet Shukla <[email protected]> wrote:\n> It seems that it is happening because of the way the database is created. On\n> an old database it runs perfectly fine or if I use the old DB as template to\n> create the new one, it runs fine. But if I create a new DB with same\n> settings and permissions it hangs. I'm now trying to find the difference\n> between these 2 databases.\n\nLikely a difference in encoding or collation. What does \\l show you\n(that's a lower case L btw)\n\nsmarlowe=> \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype |\nAccess privileges\n-----------+----------+----------+-------------+-------------+-----------------------\n postgres | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n smarlowe | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n\nUTF8 and en_US are much more expensive than SQL_ASCII and C would be\nfor text and such. Basically indexes either don't work or work as well\nunder en_US if you're comparing or sorting text.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Aug 2017 09:13:47 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 2 server with same configuration but huge difference in\n performance"
},
{
"msg_contents": "Hello Sumeet,\n\nCompare schema on both database to make sure there is no mismatches. And\nverify LOCKs. if all looks good,\ndo analyze on newly created database before start execution. This will help\nyou. New database doesn't have any stats for generate execution plan.\n\n\n\nThanks & Regards,\nNaveen Kumar .M,\nSr. PostgreSQL Database Administrator,\nMobile: 7755929449.\n*My attitude will always be based on how you treat me. *\n\n\nOn Tue, Aug 1, 2017 at 9:43 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Tue, Aug 1, 2017 at 8:45 AM, Sumeet Shukla <[email protected]>\n> wrote:\n> > It seems that it is happening because of the way the database is\n> created. On\n> > an old database it runs perfectly fine or if I use the old DB as\n> template to\n> > create the new one, it runs fine. But if I create a new DB with same\n> > settings and permissions it hangs. I'm now trying to find the difference\n> > between these 2 databases.\n>\n> Likely a difference in encoding or collation. What does \\l show you\n> (that's a lower case L btw)\n>\n> smarlowe=> \\l\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype |\n> Access privileges\n> -----------+----------+----------+-------------+------------\n> -+-----------------------\n> postgres | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n> smarlowe | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n>\n> UTF8 and en_US are much more expensive than SQL_ASCII and C would be\n> for text and such. Basically indexes either don't work or work as well\n> under en_US if you're comparing or sorting text.\n>\n>\n> --\n> Sent via pgsql-admin mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-admin\n>\n\nHello Sumeet,Compare schema on both database to make sure there is no mismatches. And verify LOCKs. if all looks good,do analyze on newly created database before start execution. This will help you. New database doesn't have any stats for generate execution plan. Thanks & Regards,Naveen Kumar .M,Sr. PostgreSQL Database Administrator,Mobile: 7755929449.My attitude will always be based on how you treat me. \nOn Tue, Aug 1, 2017 at 9:43 PM, Scott Marlowe <[email protected]> wrote:On Tue, Aug 1, 2017 at 8:45 AM, Sumeet Shukla <[email protected]> wrote:\n> It seems that it is happening because of the way the database is created. On\n> an old database it runs perfectly fine or if I use the old DB as template to\n> create the new one, it runs fine. But if I create a new DB with same\n> settings and permissions it hangs. I'm now trying to find the difference\n> between these 2 databases.\n\nLikely a difference in encoding or collation. What does \\l show you\n(that's a lower case L btw)\n\nsmarlowe=> \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype |\nAccess privileges\n-----------+----------+----------+-------------+-------------+-----------------------\n postgres | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n smarlowe | smarlowe | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\n\nUTF8 and en_US are much more expensive than SQL_ASCII and C would be\nfor text and such. Basically indexes either don't work or work as well\nunder en_US if you're comparing or sorting text.\n\n\n--\nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin",
"msg_date": "Wed, 2 Aug 2017 03:44:46 +0530",
"msg_from": "Naveen Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 2 server with same configuration but huge\n difference in performance"
}
] |
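Tying the last two replies together: the encoding/collation difference and the missing statistics can both be checked directly from SQL. A sketch, with newdb as a placeholder name; creating a C-collated database is only appropriate if the data and application can live without linguistic sorting:

-- Compare encoding and collation of the databases without psql's \l.
SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate, datctype
FROM pg_database;

-- Create a new database that matches a fast, C-collated reference database.
CREATE DATABASE newdb TEMPLATE template0 ENCODING 'UTF8' LC_COLLATE 'C' LC_CTYPE 'C';

-- After loading data into the new database, refresh planner statistics.
ANALYZE VERBOSE;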
[
{
"msg_contents": "Dear team,\n\nCan you please let me know how we can create a view using db link,\nA base table column having serial datatype. And we want to create a view of that table on server B. But unable to create and getting the below issue.\n\nError:\n\nERROR: type \"serial\" does not exist\nLINE 17: as roaster_test ( roaster_id serial,\n ^\n********** Error **********\n\nERROR: type \"serial\" does not exist\nSQL state: 42704\nCharacter: 432\n\nScript:\n\ncreate or replace view roaster_test as\nselect * from dblink('port=5433 host=INN14U-DW1427 dbname=postgres user=postgres password=postgres94',\n'select\n roaster_id, roaster_date, pickdrop, roaster_state, cab_id, shift_key, roaster_creation_date,\n status integer,\n notificationcount, totaltraveldistance, start_trip, end_trip, trip_duration from public.roaster')\nas roaster_test ( roaster_id serial,\n roaster_date date,\n pickdrop \"char\",\n roaster_state character varying,\n cab_id character varying,\n shift_key integer,\n roaster_creation_date date,\n status integer,\n notificationcount integer,\n totaltraveldistance double precision,\n start_trip text,\n end_trip text,\n trip_duration text)\n\n\n\nSuggest me if there is any alternate way for the same.\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nDear team, \n \nCan you please let me know how we can create a view using db link,\nA base table column having serial datatype. And we want to create a view of that table on server B. But unable to create and getting the below issue.\n \nError:\n \nERROR: type \"serial\" does not exist\nLINE 17: as roaster_test ( roaster_id serial,\n ^\n********** Error **********\n \nERROR: type \"serial\" does not exist\nSQL state: 42704\nCharacter: 432\n \nScript:\n \ncreate or replace view roaster_test as \nselect * from dblink('port=5433 host=INN14U-DW1427 dbname=postgres user=postgres password=postgres94',\n\n'select \n roaster_id, roaster_date, pickdrop, roaster_state, cab_id, shift_key, roaster_creation_date,\n status integer,\n notificationcount, totaltraveldistance, start_trip, end_trip, trip_duration from public.roaster')\n\nas roaster_test ( roaster_id serial,\n roaster_date date,\n pickdrop \"char\",\n roaster_state character varying,\n cab_id character varying,\n shift_key integer,\n roaster_creation_date date,\n status integer,\n notificationcount integer,\n totaltraveldistance double precision,\n start_trip text,\n end_trip text,\n trip_duration text)\n \n \n \nSuggest me if there is any alternate way for the same.\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 3 Aug 2017 07:18:20 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create view "
},
{
"msg_contents": "Hi\n\nThis is wrong mailing list for this question - please, use pgsql-general\nfor similar questions. I don't see any relation to performance.\n\n2017-08-03 9:18 GMT+02:00 Daulat Ram <[email protected]>:\n\n> Dear team,\n>\n>\n>\n> Can you please let me know how we can create a view using db link,\n>\n> A base table column having serial datatype. And we want to create a view\n> of that table on server B. But unable to create and getting the below issue.\n>\n>\n>\n> *Error*:\n>\n>\n>\n> ERROR: type \"serial\" does not exist\n>\n> LINE 17: as roaster_test ( roaster_id serial,\n>\n> ^\n>\n> ********** Error **********\n>\n>\n>\n> ERROR: type \"serial\" does not exist\n>\n> SQL state: 42704\n>\n> Character: 432\n>\n>\n>\n> *Script*:\n>\n>\n>\n> create or replace view roaster_test as\n>\n> select * from dblink('port=5433 host=INN14U-DW1427 dbname=postgres\n> user=postgres password=postgres94',\n>\n> 'select\n>\n> roaster_id, roaster_date, pickdrop, roaster_state, cab_id,\n> shift_key, roaster_creation_date,\n>\n> status integer,\n>\n> notificationcount, totaltraveldistance, start_trip, end_trip,\n> trip_duration from public.roaster')\n>\n> *as roaster_test* ( roaster_id serial,\n>\n> roaster_date date,\n>\n> pickdrop \"char\",\n>\n> roaster_state character varying,\n>\n> cab_id character varying,\n>\n> shift_key integer,\n>\n> roaster_creation_date date,\n>\n> status integer,\n>\n> notificationcount integer,\n>\n> totaltraveldistance double precision,\n>\n> start_trip text,\n>\n> end_trip text,\n>\n> trip_duration text)\n>\n>\n>\n>\n>\n>\n>\n> Suggest me if there is any alternate way for the same.\n>\n\nSerial is \"pseudotype\" and can be used only for CREATE TABLE command. This\npseudotype is translated to \"int DEFAULT nextval(automatic_sequence)\"\n\nUse int instead in your case.\n\nRegards\n\nPavel Stehule\n\n>\n>\n> Regards,\n>\n> Daulat\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\nHiThis is wrong mailing list for this question - please, use pgsql-general for similar questions. I don't see any relation to performance.2017-08-03 9:18 GMT+02:00 Daulat Ram <[email protected]>:\n\n\nDear team, \n \nCan you please let me know how we can create a view using db link,\nA base table column having serial datatype. And we want to create a view of that table on server B. 
But unable to create and getting the below issue.\n \nError:\n \nERROR: type \"serial\" does not exist\nLINE 17: as roaster_test ( roaster_id serial,\n ^\n********** Error **********\n \nERROR: type \"serial\" does not exist\nSQL state: 42704\nCharacter: 432\n \nScript:\n \ncreate or replace view roaster_test as \nselect * from dblink('port=5433 host=INN14U-DW1427 dbname=postgres user=postgres password=postgres94',\n\n'select \n roaster_id, roaster_date, pickdrop, roaster_state, cab_id, shift_key, roaster_creation_date, \n status integer,\n notificationcount, totaltraveldistance, start_trip, end_trip, trip_duration from public.roaster')\n\nas roaster_test ( roaster_id serial,\n roaster_date date,\n pickdrop \"char\",\n roaster_state character varying,\n cab_id character varying,\n shift_key integer,\n roaster_creation_date date,\n status integer,\n notificationcount integer,\n totaltraveldistance double precision,\n start_trip text,\n end_trip text,\n trip_duration text)\n \n \n \nSuggest me if there is any alternate way for the same.Serial is \"pseudotype\" and can be used only for CREATE TABLE command. This pseudotype is translated to \"int DEFAULT nextval(automatic_sequence)\"Use int instead in your case.RegardsPavel Stehule\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 3 Aug 2017 09:25:15 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create view"
}
] |
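Following the advice above, the column definition list of the dblink call just needs a plain integer (or bigint, whichever matches the remote column's underlying type) instead of the serial pseudotype. A sketch of the corrected view; only the first column's type changes, and the stray word "integer" after status inside the remote query string, which looks like a copy-paste slip, is dropped as well:

create or replace view roaster_test as
select * from dblink('port=5433 host=INN14U-DW1427 dbname=postgres user=postgres password=postgres94',
  'select roaster_id, roaster_date, pickdrop, roaster_state, cab_id, shift_key,
          roaster_creation_date, status, notificationcount, totaltraveldistance,
          start_trip, end_trip, trip_duration
   from public.roaster')
as roaster_test ( roaster_id integer,  -- was serial; serial is only valid in CREATE TABLE
                  roaster_date date,
                  pickdrop "char",
                  roaster_state character varying,
                  cab_id character varying,
                  shift_key integer,
                  roaster_creation_date date,
                  status integer,
                  notificationcount integer,
                  totaltraveldistance double precision,
                  start_trip text,
                  end_trip text,
                  trip_duration text);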
[
{
"msg_contents": "Hi,\n\n We recently upgraded our database from 9.1 to 9.6. We are seeing some\nunusual slow queries after the upgrade.\nSometimes the queries are faster after vacuum analyze, but not consistent.\nWe tried with different settings of random_page_cost, work_mem,\neffective_cache_size but the query results are the same. I am trying to\nunderstand if changing the queries/indexes would give us better\nperformance. Please provide your suggestions. Below is our table,index\ndefinition. \n\nTable : cm_ci_relations\n\n Column | Type | Modifiers\n | Storage | Stats target | Description\n---------------------+-----------------------------+-----------------------\n-+----------+--------------+-------------\n ci_relation_id | bigint | not null\n | plain | |\n ns_id | bigint | not null\n | plain | 200 |\n from_ci_id | bigint | not null\n | plain | |\n relation_goid | character varying(256) | not null\n | extended | |\n relation_id | integer | not null\n | plain | |\n to_ci_id | bigint | not null\n | plain | |\n ci_state_id | integer | not null\n | plain | |\n last_applied_rfc_id | bigint |\n | plain | |\n comments | character varying(2000) |\n | extended | |\n created_by | character varying(200) |\n | extended | |\n update_by | character varying(200) |\n | extended | |\n created | timestamp without time zone | not null default\nnow() | plain | |\n updated | timestamp without time zone | not null default\nnow() | plain | |\nIndexes:\n \"cm_ci_relations_pk\" PRIMARY KEY, btree (ci_relation_id)\n \"cm_ci_relations_goid_idx\" UNIQUE, btree (relation_goid)\n \"cm_ci_relations_uniq_idx\" UNIQUE, btree (from_ci_id, relation_id,\nto_ci_id)\n \"cm_ci_relations_fromci_idx\" btree (from_ci_id)\n \"cm_ci_relations_ns_idx\" btree (ns_id)\n \"cm_ci_relations_r_ns_idx\" btree (relation_id, ns_id)\n \"cm_ci_relations_toci_idx\" btree (to_ci_id)\n\n\nTable : ns_namespaces\n\n Column | Type | Modifiers | Storage\n| Stats target | Description\n---------+-----------------------------+------------------------+----------\n+--------------+-------------\n ns_id | bigint | not null | plain\n| |\n ns_path | character varying(200) | not null | extended\n| 300 |\n created | timestamp without time zone | not null default now() | plain\n| |\nIndexes:\n \"ns_namespaces_pk\" PRIMARY KEY, btree (ns_id)\n \"ns_namespaces_ak\" UNIQUE, btree (ns_path)\n \"ns_namespaces_vpo\" btree (ns_path varchar_pattern_ops)\n\n\nTable : cm_ci\n\n Column | Type | Modifiers\n | Storage | Stats target | Description\n---------------------+-----------------------------+-----------------------\n-+----------+--------------+-------------\n ci_id | bigint | not null\n | plain | |\n ns_id | bigint | not null\n | plain | |\n class_id | integer | not null\n | plain | |\n ci_name | character varying(200) | not null\n | extended | |\n ci_goid | character varying(256) | not null\n | extended | |\n comments | character varying(2000) |\n | extended | |\n ci_state_id | integer | not null\n | plain | |\n last_applied_rfc_id | bigint |\n | plain | |\n created_by | character varying(200) |\n | extended | |\n updated_by | character varying(200) |\n | extended | |\n created | timestamp without time zone | not null default\nnow() | plain | |\n updated | timestamp without time zone | not null default\nnow() | plain | |\nIndexes:\n \"cm_ci_pk\" PRIMARY KEY, btree (ci_id)\n \"cm_ci_3cols_idx\" UNIQUE, btree (ns_id, class_id, ci_name)\n \"df_ci_goid_idx\" UNIQUE, btree (ci_goid)\n \"cm_ci_cl_idx\" btree (class_id)\n \"cm_ci_ns_idx\" btree (ns_id)\n\n\nTable : 
md_relations\n\n Column | Type | Modifiers\n | Storage | Stats target | Description\n---------------------+-----------------------------+-----------------------\n-+----------+--------------+-------------\n relation_id | integer | not null\n | plain | |\n relation_name | character varying(200) | not null\n | extended | |\n short_relation_name | character varying(200) | not null\n | extended | |\n description | text | not null\n | extended | |\n created | timestamp without time zone | not null default\nnow() | plain | |\nIndexes:\n \"md_relations_pk\" PRIMARY KEY, btree (relation_id)\n \"md_relations_rln_idx\" UNIQUE, btree (relation_name)\n \"md_relations_srn_idx\" btree (short_relation_name)\n\n\n\nTable : md_classes\n Table \"kloopzcm.md_classes\"\n Column | Type | Modifiers |\nStorage | Stats target | Description\n------------------+-----------------------------+------------------------+-\n---------+--------------+-------------\n class_id | integer | not null |\nplain | |\n class_name | character varying(200) | not null |\nextended | |\n short_class_name | character varying(200) | not null |\nextended | |\n super_class_id | integer | |\nplain | |\n is_namespace | boolean | not null |\nplain | |\n flags | integer | not null default 0 |\nplain | |\n impl | character varying(200) | |\nextended | |\n access_level | character varying(200) | |\nextended | |\n description | text | |\nextended | |\n format | text | |\nextended | |\n created | timestamp without time zone | not null default now() |\nplain | |\nIndexes:\n \"md_classes_pk\" PRIMARY KEY, btree (class_id)\n \"md_classes_cln_idx\" UNIQUE, btree (class_name)\n \"md_classes_comp_names_idx\" btree (class_name, short_class_name)\n \"md_classes_scln_idx\" btree (short_class_name)\n\n\n\nTable : cm_ci_state\n\n Column | Type | Modifiers | Storage | Stats target\n| Description\n-------------+-----------------------+-----------+----------+--------------\n+-------------\n ci_state_id | integer | not null | plain |\n|\n state_name | character varying(64) | not null | extended |\n|\nIndexes:\n \"cm_ci_state_pk\" PRIMARY KEY, btree (ci_state_id)\n\n\n\nThe below query has been really slow after the upgrade, the explain plan\nshows that it uses the cm_ci_relations_fromci_idx index on the\ncm_ci_relations table. But when another set of parameters are used for the\nns_path the query plan is better. In general I expect the ns_namespaces,\nmd_relations being queried first and then the results are further used on\nthe cm_ci_relations_r_ns_idx index (cm_ci_relations table) and then cm_ci\ntable. That would filter out a lot of records and will be much faster.\n\nTable Data\n----------\nThe ns_namespaces table contains data like a folder structure and can go\nupto five levels separated by slash\n/f1/f2/f3/f4/f5\n/f1/f2/a1\n/f1/f2/b1\n/f1/c1\n/g1/b1\n\nThere would be a lot of duplicates matching the beginning section of the\npath.\n\ncm_ci is the instances table with around 3 million records;\ncm_ci_relations is the relations between instances table, with around 7.5\nmillion records. this table is the largest in this query.\nmd_classes contains around 2k records\nmd_relations contains around 100+ records\n\nIts not that the longer the ns_path parameter provided, the query is\nfaster. 
In some cases where the ns_path parameter is very much focused\nlike (/a/b/c/d/e) with different relation names and class names the query\nwas still slow as the planner was not using the best possible index\ncm_ci_relations_r_ns_idx.\n\n\nslow performing query:\n\nexplain (buffers, analyze) select\n cir.ci_relation_id as ciRelationId,\n cir.ns_id as nsId,\n ns.ns_path as nsPath,\n cir.from_ci_id as fromCiId,\n cir.relation_goid as relationGoid,\n cir.relation_id as relationId,\n mdr.relation_name as relationName,\n cir.to_ci_id toCiId,\n cir.ci_state_id as relationStateId,\n cis.state_name as relationState,\n cir.last_applied_rfc_id as lastAppliedRfcId,\n cir.comments,\n cir.created,\n cir.updated\n from cm_ci_relations cir, md_relations mdr, cm_ci_state cis, cm_ci\nfrom_ci, md_classes from_mdc, cm_ci to_ci, md_classes to_mdc,\nns_namespaces ns\n where (ns.ns_path like '/test1/%' or ns.ns_path = '/test1')\n and cir.ns_id = ns.ns_id\n and cir.ci_state_id = cis.ci_state_id\n and cir.relation_id = mdr.relation_id\n and (mdr.relation_name = 'base.DeployedTo')\n and cir.from_ci_id = from_ci.ci_id\n and from_ci.class_id = from_mdc.class_id\n and ( from_mdc.class_name = 'bom.Compute')\n and cir.to_ci_id = to_ci.ci_id\n and to_ci.class_id = to_mdc.class_id;\n\n\nbelow is the explain plan for this query\n \n QUERY PLAN\n\n---------------------------------------------------------------------------\n-----------------------------------------------------------------\n-----------------------------------------------\n Nested Loop (cost=139.97..18932.15 rows=1 width=288) (actual\ntime=63.741..7213.251 rows=276 loops=1)\n Buffers: shared hit=552715 read=6114\n -> Nested Loop (cost=139.69..18931.84 rows=1 width=292) (actual\ntime=63.675..7211.745 rows=276 loops=1)\n Buffers: shared hit=552162 read=6114\n -> Nested Loop (cost=139.26..18931.35 rows=1 width=288) (actual\ntime=63.646..7206.066 rows=276 loops=1)\n Buffers: shared hit=551058 read=6114\n -> Nested Loop (cost=139.12..18931.19 rows=1 width=277)\n(actual time=63.637..7199.116 rows=276 loops=1)\n Buffers: shared hit=550506 read=6114\n -> Nested Loop (cost=138.70..18919.38 rows=26\nwidth=228) (actual time=58.446..6620.992 rows=62689 loops=1)\n Join Filter: (cir.relation_id = mdr.relation_id)\n Rows Removed by Join Filter: 125384\n Buffers: shared hit=299270 read=6114\n -> Seq Scan on md_relations mdr\n(cost=0.00..7.59 rows=1 width=22) (actual time=0.017..0.060 rows=1 loops=1)\n Filter: ((relation_name)::text =\n'base.DeployedTo'::text)\n Rows Removed by Filter: 126\n Buffers: shared hit=6\n -> Nested Loop (cost=138.70..18869.86\nrows=3355 width=210) (actual time=58.418..6551.520 rows=188073 loops=1)\n Buffers: shared hit=299264 read=6114\n -> Nested Loop (cost=138.27..17306.08\nrows=1271 width=8) (actual time=58.367..1012.918 rows=62710 loops=1\n)\n Buffers: shared hit=28631\n -> Index Scan using\nmd_classes_comp_names_idx on md_classes from_mdc (cost=0.28..8.30 rows=1\nwidth=\n4) (actual time=0.031..0.037 rows=1 loops=1)\n Index Cond:\n((class_name)::text = 'bom.Compute'::text)\n Buffers: shared hit=3\n -> Bitmap Heap Scan on cm_ci\nfrom_ci (cost=137.99..17238.99 rows=5879 width=12) (actual time=58.332\n..980.258 rows=62710 loops=1)\n Recheck Cond: (class_id =\nfrom_mdc.class_id)\n Heap Blocks: exact=28001\n Buffers: shared hit=28628\n -> Bitmap Index Scan on\ncm_ci_cl_idx (cost=0.00..136.52 rows=5879 width=0) (actual time=52.52\n0..52.520 rows=63497 loops=1)\n Index Cond: (class_id =\nfrom_mdc.class_id)\n Buffers: shared hit=627\n -> Index Scan 
using\ncm_ci_relations_fromci_idx on cm_ci_relations cir (cost=0.43..1.07\nrows=16 width=210)\n (actual time=0.067..0.084 rows=3 loops=62710)\n Index Cond: (from_ci_id =\nfrom_ci.ci_id)\n Buffers: shared hit=270633 read=6114\n -> Index Scan using ns_namespaces_pk on\nns_namespaces ns (cost=0.42..0.44 rows=1 width=57) (actual\ntime=0.008..0.008\nrows=0 loops=62689)\n Index Cond: (ns_id = cir.ns_id)\n Filter: (((ns_path)::text ~~ '/test1/%'::text)\nOR ((ns_path)::text = '/test1'::text))\n Rows Removed by Filter: 1\n Buffers: shared hit=251236\n -> Index Scan using cm_ci_state_pk on cm_ci_state cis\n(cost=0.13..0.15 rows=1 width=15) (actual time=0.002..0.003 rows=1 lo\nops=276)\n Index Cond: (ci_state_id = cir.ci_state_id)\n Buffers: shared hit=552\n -> Index Scan using cm_ci_pk on cm_ci to_ci (cost=0.43..0.48\nrows=1 width=12) (actual time=0.015..0.016 rows=1 loops=276)\n Index Cond: (ci_id = cir.to_ci_id)\n Buffers: shared hit=1104\n -> Index Only Scan using md_classes_pk on md_classes to_mdc\n(cost=0.28..0.30 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=276\n)\n Index Cond: (class_id = to_ci.class_id)\n Heap Fetches: 0\n Buffers: shared hit=553\n Planning time: 12.641 ms\n Execution time: 7214.707 ms\n\n\n\nsimilar query with different parameters, this gets executed much faster\n\n explain (buffers, analyze) select\n cir.ci_relation_id as ciRelationId,\n cir.ns_id as nsId,\n ns.ns_path as nsPath,\n cir.from_ci_id as fromCiId,\n cir.relation_goid as relationGoid,\n cir.relation_id as relationId,\n mdr.relation_name as relationName,\n cir.to_ci_id toCiId,\n cir.ci_state_id as relationStateId,\n cis.state_name as relationState,\n cir.last_applied_rfc_id as lastAppliedRfcId,\n cir.comments,\n cir.created,\n cir.updated\n from cm_ci_relations cir, md_relations mdr, cm_ci_state cis, cm_ci\nfrom_ci, md_classes from_mdc, cm_ci to_ci, md_classes to_mdc,\nns_namespaces ns\n where (ns.ns_path like '/test1/test2/%' or ns.ns_path = '/test1/test2')\n and cir.ns_id = ns.ns_id\n and cir.ci_state_id = cis.ci_state_id\n and cir.relation_id = mdr.relation_id\n and (mdr.relation_name = 'base.DeployedTo')\n and cir.from_ci_id = from_ci.ci_id\n and from_ci.class_id = from_mdc.class_id\n and ( from_mdc.class_name = 'bom.Compute')\n and cir.to_ci_id = to_ci.ci_id\n and to_ci.class_id = to_mdc.class_id;\n\n\n \n QUERY PLAN\n\n---------------------------------------------------------------------------\n-----------------------------------------------------------------\n--------------------------------------------------\n Nested Loop (cost=10.72..479.62 rows=1 width=288) (actual\ntime=5.101..98.016 rows=114 loops=1)\n Buffers: shared hit=13321 read=31\n -> Nested Loop (cost=10.44..479.31 rows=1 width=292) (actual\ntime=5.068..97.647 rows=114 loops=1)\n Buffers: shared hit=13092 read=31\n -> Nested Loop (cost=10.01..478.82 rows=1 width=288) (actual\ntime=5.037..94.108 rows=114 loops=1)\n Join Filter: (cir.ci_state_id = cis.ci_state_id)\n Rows Removed by Join Filter: 456\n Buffers: shared hit=12636 read=31\n -> Nested Loop (cost=10.01..477.71 rows=1 width=277)\n(actual time=5.030..93.568 rows=114 loops=1)\n Buffers: shared hit=12522 read=31\n -> Nested Loop (cost=9.73..475.54 rows=7 width=281)\n(actual time=0.383..87.509 rows=1578 loops=1)\n Buffers: shared hit=7788 read=31\n -> Nested Loop (cost=9.30..472.10 rows=7\nwidth=277) (actual time=0.362..53.412 rows=1578 loops=1)\n Buffers: shared hit=1475 read=26\n -> Seq Scan on md_relations mdr\n(cost=0.00..7.59 rows=1 width=22) (actual time=0.014..0.037 rows=1 
loops=\n1)\n Filter: ((relation_name)::text =\n'base.DeployedTo'::text)\n Rows Removed by Filter: 126\n Buffers: shared hit=6\n -> Nested Loop (cost=9.30..463.63\nrows=88 width=259) (actual time=0.333..52.719 rows=1578 loops=1)\n Buffers: shared hit=1469 read=26\n -> Bitmap Heap Scan on\nns_namespaces ns (cost=8.87..12.88 rows=22 width=57) (actual\ntime=0.202..0.4\n51 rows=119 loops=1)\n Recheck Cond:\n(((ns_path)::text ~~ '/test1/test2/%'::text) OR ((ns_path)::text =\n'/test1/test2'\n::text))\n Filter: (((ns_path)::text ~~\n'/test1/test2/%'::text) OR ((ns_path)::text = '/test1/test2'::text\n))\n Heap Blocks: exact=48\n Buffers: shared hit=54 read=1\n -> BitmapOr\n(cost=8.87..8.87 rows=1 width=0) (actual time=0.187..0.187 rows=0 loops=1)\n Buffers: shared hit=6\nread=1\n -> Bitmap Index Scan\non ns_namespaces_vpo (cost=0.00..4.43 rows=1 width=0) (actual time\n=0.181..0.181 rows=118 loops=1)\n Index Cond:\n(((ns_path)::text ~>=~ '/test1/test2/'::text) AND ((ns_path)::text ~<~\n'/test1/test20'::text))\n Buffers: shared\nhit=3 read=1\n -> Bitmap Index Scan\non ns_namespaces_vpo (cost=0.00..4.43 rows=1 width=0) (actual time\n=0.004..0.004 rows=1 loops=1)\n Index Cond:\n((ns_path)::text = '/test1/test2'::text)\n Buffers: shared\nhit=3\n -> Index Scan using\ncm_ci_relations_r_ns_idx on cm_ci_relations cir (cost=0.43..20.45 rows=4\nwidth=\n210) (actual time=0.010..0.429 rows=13 loops=119)\n Index Cond: ((relation_id =\nmdr.relation_id) AND (ns_id = ns.ns_id))\n Buffers: shared hit=1415\nread=25\n -> Index Scan using cm_ci_pk on cm_ci from_ci\n(cost=0.43..0.48 rows=1 width=12) (actual time=0.020..0.020 rows=\n1 loops=1578)\n Index Cond: (ci_id = cir.from_ci_id)\n Buffers: shared hit=6313 read=5\n -> Index Scan using md_classes_pk on md_classes\nfrom_mdc (cost=0.28..0.30 rows=1 width=4) (actual time=0.003..0.003 r\nows=0 loops=1578)\n Index Cond: (class_id = from_ci.class_id)\n Filter: ((class_name)::text =\n'bom.Compute'::text)\n Rows Removed by Filter: 1\n Buffers: shared hit=4734\n -> Seq Scan on cm_ci_state cis (cost=0.00..1.05 rows=5\nwidth=15) (actual time=0.001..0.002 rows=5 loops=114)\n Buffers: shared hit=114\n -> Index Scan using cm_ci_pk on cm_ci to_ci (cost=0.43..0.48\nrows=1 width=12) (actual time=0.030..0.030 rows=1 loops=114)\n Index Cond: (ci_id = cir.to_ci_id)\n Buffers: shared hit=456\n -> Index Only Scan using md_classes_pk on md_classes to_mdc\n(cost=0.28..0.30 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=114\n)\n Index Cond: (class_id = to_ci.class_id)\n Heap Fetches: 0\n Buffers: shared hit=229\n Planning time: 8.468 ms\n Execution time: 98.223 ms\n\n\n\n\nThanks,\nBhaskar\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Aug 2017 06:09:33 +0000",
"msg_from": "Bhaskar Annamalai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow queries after db upgrade to 9.6"
}
] |
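No answer appears in this thread, but since the poster already has non-default statistics targets on some columns (the 200 and 300 values in the table definitions above), one low-risk avenue to try on the 9.6 cluster is to raise the targets on the skewed filter and join columns, re-analyze, and look at what the planner believes about them. Purely a hedged starting point, not a diagnosis:

-- Raise sampling targets on the columns driving the bad row estimates, then re-analyze.
ALTER TABLE ns_namespaces   ALTER COLUMN ns_path SET STATISTICS 1000;
ALTER TABLE cm_ci_relations ALTER COLUMN ns_id   SET STATISTICS 1000;
ANALYZE ns_namespaces;
ANALYZE cm_ci_relations;

-- Inspect the planner's view of these columns.
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename IN ('ns_namespaces', 'cm_ci_relations')
  AND attname IN ('ns_path', 'ns_id', 'relation_id', 'from_ci_id');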
[
{
"msg_contents": "Hello,\n\n\nWe have a fairly large static dataset that we load into Postgres. We made the tables UNLOGGED and saw a pretty significant performance improvement for the loading. This was all fantastic until the server crashed and we were surprised to see during a follow up demo that the data had disappeared... Of course, it's all our fault for not understanding the implications of UNLOGGED proprely.\n\n\nHowever, our scenario is truly a set of tables with 100's of millions of rows that are effectively WORMs: we write them once only, and then only read from them afterwards. As such, they could not be possibly corrupted post-load (i think) during a server crash (short of physical disk defects...).\n\n\nI'd like to have the performance improvement during a initial batch insert, and then make sure the table remains after \"unclean\" shutdowns, which, as you might have it, includes a regular Windows server shut down during patching for example. So unlogged tables in practice are pretty flimsy. I tried to ALTER ... SET LOGGED, but that takes a VERY long time and pretty much negates the initial performance boost of loading into an unlogged table.\n\n\nIs there a way to get my cake and eat it too?\n\n\nThank you,\n\nLaurent Hasson\n\n\n\n\n\n\n\n\n\n\n\nHello,\n\n\nWe have a fairly large static dataset that we load into Postgres. We made the tables UNLOGGED and saw a pretty significant performance improvement for the loading. This was all fantastic until the server crashed and we were surprised to see during a follow\n up demo that the data had disappeared... Of course, it's all our fault for not understanding the implications of UNLOGGED proprely.\n\n\nHowever, our scenario is truly a set of tables with 100's of millions of rows that are effectively WORMs: we write them once only, and then only read from them afterwards. As such, they could not be possibly corrupted post-load (i think) during a server\n crash (short of physical disk defects...).\n\n\n\nI'd like to have the performance improvement during a initial batch insert, and then make sure the table remains after \"unclean\" shutdowns, which, as you might have it, includes a regular Windows server shut down during patching for example. So unlogged\n tables in practice are pretty flimsy. I tried to ALTER ... SET LOGGED, but that takes a VERY long time and pretty much negates the initial performance boost of loading into an unlogged table.\n\n\nIs there a way to get my cake and eat it too?\n\n\nThank you,\nLaurent Hasson",
"msg_date": "Wed, 9 Aug 2017 03:20:19 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unlogged tables"
},
{
"msg_contents": "On Wed, Aug 9, 2017 at 5:20 AM, [email protected]\n<[email protected]> wrote:\n> We have a fairly large static dataset that we load into Postgres. We made\n> the tables UNLOGGED and saw a pretty significant performance improvement for\n> the loading. This was all fantastic until the server crashed and we were\n> surprised to see during a follow up demo that the data had disappeared... Of\n> course, it's all our fault for not understanding the implications of\n> UNLOGGED proprely.\n\nThis is documented.\n\n> However, our scenario is truly a set of tables with 100's of millions of\n> rows that are effectively WORMs: we write them once only, and then only read\n> from them afterwards. As such, they could not be possibly corrupted\n> post-load (i think) during a server crash (short of physical disk\n> defects...).\n>\n> I'd like to have the performance improvement during a initial batch insert,\n> and then make sure the table remains after \"unclean\" shutdowns, which, as\n> you might have it, includes a regular Windows server shut down during\n> patching for example. So unlogged tables in practice are pretty flimsy.\n\nAll the data that you want to keep needs to be durable anyway, so you\nwill need to WAL-log it, and full page writes of those relation pages\nwill need to be created at least once. After you get past the\ncheckpoint the data will still be around. If you want to improve the\nperformance once, there are a couple of tricks, like switching\nwal_level to minimal, preferring COPY over multi-value INSERT, batch a\nlot of them in the same transaction. Of course you can as well\nincrease wal_max_size to trigger less checkpoints, or use\nsynchronous_commit = off to reduce fsync costs.\n\n> I tried to ALTER ... SET LOGGED, but that takes a VERY long time and pretty\n> much negates the initial performance boost of loading into an unlogged\n> table.\n\nThis triggers a table rewrite and makes sure that all the data gets\nWAL-logged. The cost to pay for durability.\n\n> Is there a way to get my cake and eat it too?\n\nNot completely. Making data durable will have a cost at the end, but\nyou can leverage it.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 9 Aug 2017 12:39:12 +0200",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier <[email protected]>\nwrote:\n\n> This triggers a table rewrite and makes sure that all the data gets\n> WAL-logged. The cost to pay for durability.\n>\n> > Is there a way to get my cake and eat it too?\n>\n> Not completely. Making data durable will have a cost at the end, but\n> you can leverage it.\n>\n>\nAren't you over-playing the role of the WAL in providing durability. An\nunlogged table remains intact after a clean shutdown and so is \"durable\" if\none considers the primary \"permanence\" aspect of the word.\n\nThe trade-off the OP wishes for is \"lose crash-safety to gain write-once\n(to the data files) performance\". Seeming having this on a per-table basis\nwould be part of the desirability. It sounds like OP would be willing to\nplace the table into \"read only\" mode in order to ensure this - which is\nsomething that is not presently possible. I could envision that putting an\nunlogged table into read-only mode would cause the system to ensure that\nthe data files are fully populated and then set a flag in the catalog that\ninforms the crash recovery process to go ahead and omit truncating that\nparticular unlogged table since the data files are known to be accurate.\n\nDavid J.\n\nOn Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier <[email protected]> wrote:This triggers a table rewrite and makes sure that all the data gets\nWAL-logged. The cost to pay for durability.\n\n> Is there a way to get my cake and eat it too?\n\nNot completely. Making data durable will have a cost at the end, but\nyou can leverage it.Aren't you over-playing the role of the WAL in providing durability. An unlogged table remains intact after a clean shutdown and so is \"durable\" if one considers the primary \"permanence\" aspect of the word.The trade-off the OP wishes for is \"lose crash-safety to gain write-once (to the data files) performance\". Seeming having this on a per-table basis would be part of the desirability. It sounds like OP would be willing to place the table into \"read only\" mode in order to ensure this - which is something that is not presently possible. I could envision that putting an unlogged table into read-only mode would cause the system to ensure that the data files are fully populated and then set a flag in the catalog that informs the crash recovery process to go ahead and omit truncating that particular unlogged table since the data files are known to be accurate.David J.",
"msg_date": "Wed, 9 Aug 2017 07:37:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "David, all,\n\n* David G. Johnston ([email protected]) wrote:\n> On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier <[email protected]>\n> wrote:\n> \n> > This triggers a table rewrite and makes sure that all the data gets\n> > WAL-logged. The cost to pay for durability.\n\nThat's not entirely accurate- there are certain cases where we don't\nhave to WAL-log the data, in fact we've got a specific optimization to\navoid WAL logging when it isn't necessary (see\nsrc/backend/commands/copy.c:2392 or so), and the data will still be\ndurable once the transaction commits. There are limitations there\nthough, of course, but it sounds like those are ones the OP may be happy\nto live with in this case.\n\n> > > Is there a way to get my cake and eat it too?\n> >\n> > Not completely. Making data durable will have a cost at the end, but\n> > you can leverage it.\n>\n> Aren't you over-playing the role of the WAL in providing durability. An\n> unlogged table remains intact after a clean shutdown and so is \"durable\" if\n> one considers the primary \"permanence\" aspect of the word.\n\nIn database terms, however, durable is intended to be in the face of a\ncrash and not just a clean shutdown, otherwise we wouldn't need to bother\nwith this whole WAL thing at all.\n\n> The trade-off the OP wishes for is \"lose crash-safety to gain write-once\n> (to the data files) performance\". Seeming having this on a per-table basis\n> would be part of the desirability. It sounds like OP would be willing to\n> place the table into \"read only\" mode in order to ensure this - which is\n> something that is not presently possible. I could envision that putting an\n> unlogged table into read-only mode would cause the system to ensure that\n> the data files are fully populated and then set a flag in the catalog that\n> informs the crash recovery process to go ahead and omit truncating that\n> particular unlogged table since the data files are known to be accurate.\n\nThis does sound like a pretty interesting idea, though not really\nnecessary unless OP has a mix of data that needs to be WAL-log'd and\ndata that doesn't.\n\nWhat I believe OP is really looking for here, specifically, is using\nwal_level = minimal while creating the table (or truncating it) within\nthe same transaction as the data load is done. That will avoid having\nthe table's contents written into the WAL, and PG will treat it as a\nregular table post-commit, meaning that it won't be truncated on a\ndatabase crash.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 9 Aug 2017 11:12:26 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "On Tue, Aug 8, 2017 at 8:20 PM, [email protected] <\[email protected]> wrote:\n\n> Hello,\n>\n>\n> We have a fairly large static dataset that we load into Postgres. We made\n> the tables UNLOGGED and saw a pretty significant performance improvement\n> for the loading. This was all fantastic until the server crashed and we\n> were surprised to see during a follow up demo that the data had\n> disappeared... Of course, it's all our fault for not understanding the\n> implications of UNLOGGED proprely.\n>\n>\n> However, our scenario is truly a set of tables with 100's of millions of\n> rows that are effectively WORMs: we write them once only, and then only\n> read from them afterwards. As such, they could not be possibly corrupted\n> post-load (i think) during a server crash (short of physical disk\n> defects...).\n>\n\nYes, this is a feature many people have wanted. You'd have to somehow\nmark the unlogged table as immutable and then do a checkpoint, after which\nit would no longer need to be truncated after a crash. Alternatively, it\ncould be done automatically where the system would somehow know which\nunlogged tables were possibly touched since the last successful checkpoint,\nand truncate only those one. But, no one has implemented such a thing.\n\n>\n> I'd like to have the performance improvement during a initial batch\n> insert, and then make sure the table remains after \"unclean\" shutdowns,\n> which, as you might have it, includes a regular Windows server shut down\n> during patching for example.\n>\n\nWhy doesn't the Windows scheduled shutdown signal postgres to shutdown\ncleanly and wait for it to do so? That is what is supposed to happen.\n\n\n> So unlogged tables in practice are pretty flimsy. I tried to ALTER ... SET\n> LOGGED, but that takes a VERY long time and pretty much negates the initial\n> performance boost of loading into an unlogged table.\n>\n\nAre you using streaming or wal logging?\n\nCheers,\n\nJeff\n\nOn Tue, Aug 8, 2017 at 8:20 PM, [email protected] <[email protected]> wrote:\n\n\nHello,\n\n\nWe have a fairly large static dataset that we load into Postgres. We made the tables UNLOGGED and saw a pretty significant performance improvement for the loading. This was all fantastic until the server crashed and we were surprised to see during a follow\n up demo that the data had disappeared... Of course, it's all our fault for not understanding the implications of UNLOGGED proprely.\n\n\nHowever, our scenario is truly a set of tables with 100's of millions of rows that are effectively WORMs: we write them once only, and then only read from them afterwards. As such, they could not be possibly corrupted post-load (i think) during a server\n crash (short of physical disk defects...).Yes, this is a feature many people have wanted. You'd have to somehow mark the unlogged table as immutable and then do a checkpoint, after which it would no longer need to be truncated after a crash. Alternatively, it could be done automatically where the system would somehow know which unlogged tables were possibly touched since the last successful checkpoint, and truncate only those one. But, no one has implemented such a thing.\n\n\n\nI'd like to have the performance improvement during a initial batch insert, and then make sure the table remains after \"unclean\" shutdowns, which, as you might have it, includes a regular Windows server shut down during patching for example. 
Why doesn't the Windows scheduled shutdown signal postgres to shutdown cleanly and wait for it to do so? That is what is supposed to happen. So unlogged\n tables in practice are pretty flimsy. I tried to ALTER ... SET LOGGED, but that takes a VERY long time and pretty much negates the initial performance boost of loading into an unlogged table. Are you using streaming or wal logging? Cheers, Jeff",
"msg_date": "Wed, 9 Aug 2017 09:14:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
}
] |
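A hedged sketch of the WAL-skipping bulk load that Michael and Stephen describe above: with wal_level = minimal, a COPY into a table that was created or truncated in the same transaction does not write the row data to WAL, yet the table is an ordinary logged table and survives a crash once the transaction commits. The table name and file path below are placeholders, and wal_level = minimal needs a restart plus max_wal_senders = 0, so it rules out streaming replication.

    -- postgresql.conf: wal_level = minimal, max_wal_senders = 0 (restart required)
    BEGIN;
    TRUNCATE big_static_table;            -- or CREATE TABLE in this same transaction
    COPY big_static_table FROM '/path/to/data.csv' WITH (FORMAT csv);
    COMMIT;                               -- the rows are crash-safe from here on

Batching several COPY commands into one such transaction, as Michael suggests, also keeps per-transaction overhead down.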
[
{
"msg_contents": "On Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes <[email protected]> wrote:\n\n >Why doesn't the Windows scheduled shutdown signal postgres to shutdown\n >cleanly and wait for it to do so? That is what is supposed to happen.\n\nWindows *does* signal shutdown (and sleep and hibernate and wakeup). \npg_ctl can catch these signals only when running as a service ... it \nwill not catch any system signals when run as an application.\n\nGeorge\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 9 Aug 2017 14:15:37 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "Ok, I am not sure. I run Postgres as a service, and when my Windows rebooted after a patch, UNLOGGED tables were cleaned... maybe the patch process in Windows messed something up, I don't know.\r\n\r\nFrom: [email protected]\r\nSent: August 9, 2017 13:17\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Unlogged tables\r\n\r\n\r\nOn Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes <[email protected]> wrote:\r\n\r\n >Why doesn't the Windows scheduled shutdown signal postgres to shutdown\r\n >cleanly and wait for it to do so? That is what is supposed to happen.\r\n\r\nWindows *does* signal shutdown (and sleep and hibernate and wakeup).\r\npg_ctl can catch these signals only when running as a service ... it\r\nwill not catch any system signals when run as an application.\r\n\r\nGeorge\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n\n\n\n\n\n\n\n\n\r\nOk, I am not sure. I run Postgres as a service, and when my Windows rebooted after a patch, UNLOGGED tables were cleaned... maybe the patch process in Windows messed something up, I don't know.\n\n\n\n\n\n\n\n\n\n\nFrom: [email protected]\nSent: August 9, 2017 13:17\nTo: [email protected]\nSubject: Re: [PERFORM] Unlogged tables\n\n\n\n\n\n\n\n\n\n\n\n\n\nOn Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes <[email protected]> wrote:\n\r\n >Why doesn't the Windows scheduled shutdown signal postgres to shutdown\r\n >cleanly and wait for it to do so? That is what is supposed to happen.\n\r\nWindows *does* signal shutdown (and sleep and hibernate and wakeup). \r\npg_ctl can catch these signals only when running as a service ... it \r\nwill not catch any system signals when run as an application.\n\r\nGeorge\n\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 9 Aug 2017 18:30:10 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "Please don't top post.\n\nOn 8/9/2017 2:30 PM, [email protected] wrote:\n> > On 8/9/2017 2:17 PM, [email protected] wrote:\n>\n> >> On Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes <[email protected]> wrote:\n>\n> >> Why doesn't the Windows scheduled shutdown signal postgres to shutdown\n> >> cleanly and wait for it to do so? That is what is supposed to happen.\n>\n> > Windows *does* signal shutdown (and sleep and hibernate and wakeup).\n> > pg_ctl can catch these signals only when running as a service ... it\n> > will not catch any system signals when run as an application.\n>\n> Ok, I am not sure. I run Postgres as a service, and when my Windows \n> rebooted after a patch, UNLOGGED tables were cleaned... maybe the \n> patch process in Windows messed something up, I don't know.\n\nHmm. Do you have checkpoint intervals set very long? Or do you have \nthe Windows shutdown delay(s) set short?\n\nData in unlogged tables persists only AFTER a checkpoint ... if the \ntables had been written to and were \"dirty\", and the system went down \nbefore the shutdown checkpoint (or before the shutdown checkpoint \ncompleted), then the tables would be truncated at the next startup.\n\n\nService control in Windows is very different from Unix/Linux, and \nWindows is not completely POSIX compatible. I develop software for \nWindows and Linux, but I only use Postgresql. Postgresql was written \noriginally for Unix and it is possible that the Windows version is not \ndoing something quite right.\n\nI took a quick glance at the source for pg_ctl: SERVICE_CONTROL_SHUTDOWN \nand SERVICE_CONTROL_STOP both just set an shared event to notify the \nwriter processes to terminate. Offhand I don't see where pg_ctl - \nrunning as a service - is waiting for the writer processes to actually \nterminate ( it does wait if run from the command line ). It's possible \nthat your system shut down too quickly and the WAL writer was killed \ninstead of terminating cleanly.\n\n\nJust FYI, re: Postgresql as a user application.\n\nWindows doesn't send *signals* (ala Unix) at all ... it is message \nbased. The control messages are different for applications and services \n- e.g., WM_SHUTDOWN is sent to applications, SERVICE_CONTROL_SHUTDOWN is \nsent to services. In order for an application to catch a message, it \nmust create a window.\n\npg_ctl is a command line program which does not create any windows (in \nany mode). It was designed to enable it to run as a service, but when \nrun as a user application it will can't receive any system messages. \nThe user *must* manually stop a running database cluster before shutting \ndown or sleeping.\n\nGeorge\n\n\n\n\n\n\n\n Please don't top post.\n\nOn 8/9/2017 2:30 PM,\n [email protected] wrote:\n\n\n\n\n\n> On 8/9/2017 2:17\n PM, [email protected] wrote:\n\n\n\n\n\n\n\n\n>> On Wed, 9 Aug 2017\n 09:14:48 -0700, Jeff Janes\n <[email protected]> wrote:\n\n >> Why doesn't the Windows scheduled shutdown\n signal postgres to shutdown\n >> cleanly and wait for it to do so?� That is\n what is supposed to happen.\n\n > Windows *does* signal shutdown (and sleep and\n hibernate and wakeup).� \n > pg_ctl can catch these signals only when running\n as a service ... it \n > will not catch any system signals when run as an\n application.\n\n\n\n\n\n\n\n\n\n\n Ok, I am not sure. I run Postgres as a service, and when my\n Windows rebooted after a patch, UNLOGGED tables were\n cleaned... 
maybe the patch process in Windows messed\n something up, I don't know.\n\n\n\n\n Hmm.� Do you have checkpoint intervals set very long?� Or do you\n have the Windows shutdown delay(s) set short?\n\n Data in unlogged tables persists only AFTER a checkpoint ... if the\n tables had been written to and were \"dirty\", and the system went\n down before the shutdown checkpoint (or before the shutdown\n checkpoint completed), then the tables would be truncated at the\n next startup.� \n\n\n Service control in Windows is very different from Unix/Linux, and\n Windows is not completely POSIX compatible.� I develop software for\n Windows and Linux, but I only use Postgresql.� Postgresql was\n written originally for Unix and it is possible that the Windows\n version is not doing something quite right.\n\n I took a quick glance at the source for pg_ctl:�\n SERVICE_CONTROL_SHUTDOWN and SERVICE_CONTROL_STOP both just set an\n shared event to notify the writer processes to terminate.� Offhand I\n don't see where pg_ctl - running as a service - is waiting for the\n writer processes to actually terminate ( it does wait if run from\n the command line ).�� It's possible that your system shut down too\n quickly and the WAL writer was killed instead of terminating\n cleanly.\n\n\n Just FYI, re: Postgresql as a user application.� \n\n Windows doesn't send *signals* (ala Unix) at all ... it is message\n based.� The control messages are different for applications and\n services - e.g., WM_SHUTDOWN is sent to applications,\n SERVICE_CONTROL_SHUTDOWN is sent to services.� In order for an\n application to catch a message, it must create a window.� \n\n pg_ctl is a command line program which does not create any windows\n (in any mode).� It was designed to enable it to run as a service,\n but when run as a user application it will can't receive any system\n messages.� The user *must* manually stop a running database cluster\n before shutting down or sleeping.\n\n George",
"msg_date": "Wed, 9 Aug 2017 15:52:26 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "Sent from my BlackBerry - the most secure mobile device\r\nFrom: [email protected]\r\nSent: August 9, 2017 14:52\r\nTo: [email protected]\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Unlogged tables\r\n\r\n\r\nPlease don't top post.\r\n\r\nOn 8/9/2017 2:30 PM, [email protected]<mailto:[email protected]> wrote:\r\n> On 8/9/2017 2:17 PM, [email protected]<mailto:[email protected]> wrote:\r\n\r\n>> On Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes <[email protected]><mailto:[email protected]> wrote:\r\n\r\n>> Why doesn't the Windows scheduled shutdown signal postgres to shutdown\r\n>> cleanly and wait for it to do so? That is what is supposed to happen.\r\n\r\n> Windows *does* signal shutdown (and sleep and hibernate and wakeup).\r\n> pg_ctl can catch these signals only when running as a service ... it\r\n> will not catch any system signals when run as an application.\r\n\r\nOk, I am not sure. I run Postgres as a service, and when my Windows rebooted after a patch, UNLOGGED tables were cleaned... maybe the patch process in Windows messed something up, I don't know.\r\n\r\nHmm. Do you have checkpoint intervals set very long? Or do you have the Windows shutdown delay(s) set short?\r\n\r\nData in unlogged tables persists only AFTER a checkpoint ... if the tables had been written to and were \"dirty\", and the system went down before the shutdown checkpoint (or before the shutdown checkpoint completed), then the tables would be truncated at the next startup.\r\n\r\n\r\nService control in Windows is very different from Unix/Linux, and Windows is not completely POSIX compatible. I develop software for Windows and Linux, but I only use Postgresql. Postgresql was written originally for Unix and it is possible that the Windows version is not doing something quite right.\r\n\r\nI took a quick glance at the source for pg_ctl: SERVICE_CONTROL_SHUTDOWN and SERVICE_CONTROL_STOP both just set an shared event to notify the writer processes to terminate. Offhand I don't see where pg_ctl - running as a service - is waiting for the writer processes to actually terminate ( it does wait if run from the command line ). It's possible that your system shut down too quickly and the WAL writer was killed instead of terminating cleanly.\r\n\r\n\r\nJust FYI, re: Postgresql as a user application.\r\n\r\nWindows doesn't send *signals* (ala Unix) at all ... it is message based. The control messages are different for applications and services - e.g., WM_SHUTDOWN is sent to applications, SERVICE_CONTROL_SHUTDOWN is sent to services. In order for an application to catch a message, it must create a window.\r\n\r\npg_ctl is a command line program which does not create any windows (in any mode). It was designed to enable it to run as a service, but when run as a user application it will can't receive any system messages. The user *must* manually stop a running database cluster before shutting down or sleeping.\r\n\r\nGeorge\r\n\r\n\r\nHello George... I know about not doing top posting but was emailing from my phone, and just recently moved to Android. I think I am still not configured right.\r\n\r\nSomewhat orthogonal, but any particular reason why top posts == bad, or just convention?\r\n\r\nI will try a few scenarios and report back. 
I do not believe I have long cp intervals and I do not believe the windows machine shuts down faster than 'normal'\r\n\r\nFinally, my true question was whether Postgres would support something like worm with the performance benefits of UNLOGGED, but not the inconveniences of auto truncates.\r\n\r\nThanks.\r\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\r\nSent from my BlackBerry - the most secure mobile device\n\n\n\n\n\n\n\n\nFrom: [email protected]\nSent: August 9, 2017 14:52\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Unlogged tables\n\n\n\n\n\n\n\n\n\n\n\n\n\r\nPlease don't top post.\n\nOn 8/9/2017 2:30 PM, \r\[email protected] wrote:\n\n\n\n> On 8/9/2017 2:17 PM,\r\[email protected] wrote:\n\n\n\n\n\n\n>> On Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes \r\n<[email protected]> wrote:\n\r\n>> Why doesn't the Windows scheduled shutdown signal postgres to shutdown\r\n>> cleanly and wait for it to do so? That is what is supposed to happen.\n\r\n> Windows *does* signal shutdown (and sleep and hibernate and wakeup). \r\n> pg_ctl can catch these signals only when running as a service ... it \r\n> will not catch any system signals when run as an application.\n\n\n\n\n\n\n\n\n\n\r\nOk, I am not sure. I run Postgres as a service, and when my Windows rebooted after a patch, UNLOGGED tables were cleaned... maybe the patch process in Windows messed something up, I don't know.\n\n\n\n\r\nHmm. Do you have checkpoint intervals set very long? Or do you have the Windows shutdown delay(s) set short?\n\r\nData in unlogged tables persists only AFTER a checkpoint ... if the tables had been written to and were \"dirty\", and the system went down before the shutdown checkpoint (or before the shutdown checkpoint completed), then the tables would be truncated at the\r\n next startup. \n\n\r\nService control in Windows is very different from Unix/Linux, and Windows is not completely POSIX compatible. I develop software for Windows and Linux, but I only use Postgresql. Postgresql was written originally for Unix and it is possible that the Windows\r\n version is not doing something quite right.\n\r\nI took a quick glance at the source for pg_ctl: SERVICE_CONTROL_SHUTDOWN and SERVICE_CONTROL_STOP both just set an shared event to notify the writer processes to terminate. Offhand I don't see where pg_ctl - running as a service - is waiting for the writer\r\n processes to actually terminate ( it does wait if run from the command line ). It's possible that your system shut down too quickly and the WAL writer was killed instead of terminating cleanly.\n\n\r\nJust FYI, re: Postgresql as a user application. \n\r\nWindows doesn't send *signals* (ala Unix) at all ... it is message based. The control messages are different for applications and services - e.g., WM_SHUTDOWN is sent to applications, SERVICE_CONTROL_SHUTDOWN is sent to services. In order for an application\r\n to catch a message, it must create a window. \n\r\npg_ctl is a command line program which does not create any windows (in any mode). It was designed to enable it to run as a service, but when run as a user application it will can't receive any system messages. The user *must* manually stop a running database\r\n cluster before shutting down or sleeping.\n\r\nGeorge\n\n\n\n\nHello George... I know about not doing top posting but was emailing from my phone, and just recently moved to Android. I think I am still not configured right.\n\n\nSomewhat orthogonal, but any particular reason why top posts == bad, or just convention? 
\n\n\nI will try a few scenarios and report back. I do not believe I have long cp intervals and I do not believe the windows machine shuts down faster than 'normal'\n\n\nFinally, my true question was whether Postgres would support something like worm with the performance benefits of UNLOGGED, but not the inconveniences of auto truncates.\n\n\nThanks.",
"msg_date": "Thu, 10 Aug 2017 05:29:35 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "On 8/10/2017 1:29 AM, [email protected] wrote:\n> Hello George... I know about not doing top posting but was emailing \n> from my phone, and just recently moved to Android. I think I am still \n> not configured right.\n>\n> Somewhat orthogonal, but any particular reason why top posts == bad, \n> or just convention?\n\nThe standard joke reply is:\n\n Because it messes up the order in which people normally read.\n > Why is top-posting such a bad thing?\n >> Top-posting.\n >>> What is the most annoying thing in e-mail?\n\n<grin>\n\nIt is just convention, but with a good reason: most posts in groups are \npart of a discussion, and it's hard to follow a discussion when replies \nare far from the comment or question that provoked them. The convention \nfor discussion is \"interleaved\" style.\n\nThe email top posting convention serves a different purpose: to preserve \na record of the communication. Polite people often use a mix of \nstyles: copying the [latest portion of the ] quoted message to the top \nand replying to it inline (as with a discussion).\n\nsee https://en.wikipedia.org/wiki/Posting_style\n\n\nThen too, there is the issue of editing. With an email, typically only \na handful of people will receive it. With a public group or mailing \nlist, all of the participants - perhaps thousands - will receive the \npost. When lots of people in a popular discussion quote the entire \nmessage, it quickly grows to an unwieldy size and eventually will be \nrejected by the servers.\n\nThe polite thing when replying is to edit the original message to \ninclude just information relevant to your reply, and then reply inline. \nLeave archiving of the discussion to the servers.\n\n\n> I will try a few scenarios and report back. I do not believe I have \n> long cp intervals and I do not believe the windows machine shuts down \n> faster than 'normal'\n\nYour problem still may be related to the shutdown delay.\n\nThe way it works is: Windows sends a shutdown message to the service, \nand the service replies with an estimate of how long it will take to \nstop. Until the service terminates, Windows waits and periodically \npolls the service asking for its progress. Windows continues to wait \nuntil the service process either terminates, or until the system \nconfigured \"drop-dead\" timeout occurs, at which time Windows forcibly \nkills the service and continues with the shutdown.\n\nThe problem is that Postgresql is not a single process: pg_ctl spawns a \nbunch of children. Looking further at the source, I believe pg_ctl is \nwaiting for the children to terminate before stopping itself - but it is \nNOT responding to Windows progress messages, so Windows has no idea \nwhether it is making headway or needs more time to complete.\n\nWindows has no idea that those other processes are connected to the \nPostgresql service, so if it times out and kills pg_ctl, it assumes it \nis done with Postgresql. The other processes then may be killed whether \nor not they are finished.\n\n\n> Finally, my true question was whether Postgres would support something \n> like worm with the performance benefits of UNLOGGED, but not the \n> inconveniences of auto truncates.\n\nI saw some of the other responses re: that issue.\n\nAs I mentioned previously, an unlogged table will be truncated on \nstartup if it is dirty - i.e. there were any updates that haven't \nsurvived at least one checkpoint. The only thing you could try to do is \nforce a checkpoint immediately following an unlogged table write. 
But \nthat is expensive performance wise and is not encouraged.\n\nhttps://www.postgresql.org/docs/current/static/sql-checkpoint.html\n\nGeorge\n\n\n\n\n\n\n\n\n\nOn 8/10/2017 1:29 AM,\n [email protected] wrote:\n\n\n\n\n\nHello George... I know about\n not doing top posting but was emailing from my phone, and just\n recently moved to Android. I think I am still not configured\n right.\n\n\nSomewhat orthogonal, but any\n particular reason why top posts == bad, or just convention? \n\n\n\n\n The standard joke reply is:\n\n �� Because it messes up the order in which people normally read.\n ��\n > Why is top-posting such a bad thing?� \n �� >> Top-posting.\n �� >>> What is the most annoying thing in e-mail?\n\n <grin>\n\n It is just convention, but with a good reason:� most posts in groups\n are part of a discussion, and it's hard to follow a discussion when\n replies are far from the comment or question that provoked them.��\n The convention for discussion is \"interleaved\" style.\n\n The email top posting convention serves a different purpose: to\n preserve a record of the communication.� Polite people often use a\n mix of styles:� copying the [latest portion of the ] quoted message\n to the top and replying to it inline (as with a discussion).\n\n see https://en.wikipedia.org/wiki/Posting_style\n\n\n Then too, there is the issue of editing.� With an email, typically\n only a handful of people will receive it.� With a public group or\n mailing list, all of the participants� - perhaps thousands - will\n receive the post.� When lots of people in a popular discussion quote\n the entire message, it quickly grows to an unwieldy size and\n eventually will be rejected by the servers.\n\n The polite thing when replying is to edit the original message to\n include just information relevant to your reply, and then reply\n inline.� Leave archiving of the discussion to the servers.\n\n\n\n\nI will try a few scenarios and\n report back. 
I do not believe I have long cp intervals and I\n do not believe the windows machine shuts down faster than\n 'normal'\n\n\n\n Your problem still may be related to the shutdown delay.� \n\n The way it works is: Windows sends a shutdown message to the\n service, and the service replies with an estimate of how long it\n will take to stop.� Until the service terminates, Windows waits and\n periodically polls the service asking for its progress.� Windows\n continues to wait until the service process either terminates, or\n until the system configured \"drop-dead\" timeout occurs, at which\n time Windows forcibly kills the service and continues with the\n shutdown.\n\n The problem is that Postgresql is not a single process: pg_ctl\n spawns a bunch of children.� Looking further at the source, I\n believe pg_ctl is waiting for the children to terminate before\n stopping itself - but it is NOT responding to Windows progress\n messages, so Windows has no idea whether it is making headway or\n needs more time to complete.� \n\n Windows has no idea that those other processes are connected to the\n Postgresql service, so if it times out and kills pg_ctl, it assumes\n it is done with Postgresql.� The other processes then may be killed\n whether or not they are finished.\n\n\n\n\nFinally, my true question was\n whether Postgres would support something like worm with the\n performance benefits of UNLOGGED, but not the inconveniences\n of auto truncates.\n \n\n\n\n I saw some of the other responses re: that issue.� \n\n As I mentioned previously, an unlogged table will be truncated on\n startup if it is dirty - i.e. there were any updates that haven't\n survived at least one checkpoint.� The only thing you could try to\n do is force a checkpoint immediately following an unlogged table\n write.� But that is expensive performance wise and is not\n encouraged.\n\nhttps://www.postgresql.org/docs/current/static/sql-checkpoint.html\n\n George",
"msg_date": "Thu, 10 Aug 2017 18:28:37 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged tables"
},
{
"msg_contents": "On 8/10/2017 1:29 AM, [email protected] wrote:\n>\n> Finally, my true question was whether Postgres would support something \n> like worm with the performance benefits of UNLOGGED, but not the \n> inconveniences of auto truncates.\n>\n\nIf you can live with the limitations, one other thing you might try is \nstoring WORM data in the filesystem and accessing it via file_fdw.\nhttps://www.postgresql.org/docs/current/static/file-fdw.html\n\nThere are a lot of downsides to this: file_fdw tables are read-only, so \nyou have to update the external file through some other means. Also, \nI've never used file_fdw, so I'm not sure whether you can create indexes \non the tables - and even if you can, you would need to manually recreate \nthe indexes periodically because Postgresql won't see your updates.\n\nGeorge\n\n\n\n\n\n\n\n\n\nOn 8/10/2017 1:29 AM,\n [email protected] wrote:\n\n\n\n\n\nFinally, my true question was\n whether Postgres would support something like worm with the\n performance benefits of UNLOGGED, but not the inconveniences\n of auto truncates.\n\n\n\n\n If you can live with the limitations, one other thing you might try\n is storing WORM data in the filesystem and accessing it via\n file_fdw.\nhttps://www.postgresql.org/docs/current/static/file-fdw.html\n\n There are a lot of downsides to this:� file_fdw tables are\n read-only, so you have to update the external file through some\n other means.� Also, I've never used file_fdw, so I'm not sure\n whether you can create indexes on the tables - and even if you can,\n you would need to manually recreate the indexes periodically because\n Postgresql won't see your updates.\n\n George",
"msg_date": "Thu, 10 Aug 2017 18:52:46 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unlogged tables"
}
] |
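A rough sketch of the file_fdw route George mentions just above, assuming the WORM data already exists as a CSV file on the database server; the server name, column list and file path are made up for illustration. The foreign table is read-only from PostgreSQL's point of view and cannot be indexed, which is the main trade-off George points out.

    CREATE EXTENSION file_fdw;
    CREATE SERVER worm_files FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE worm_data (
        id      bigint,
        payload text
    ) SERVER worm_files
      OPTIONS (filename '/data/worm_data.csv', format 'csv');

If indexed access were needed, one option would be a materialized view built on top of the foreign table, at the cost of storing the data inside PostgreSQL again.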
[
{
"msg_contents": "I have performance issues with two big tables. Those tables are located on\nan oracle remote database. I'm running the quert : insert into\nlocal_postgresql_table select * from oracle_remote_table.\n\nThe first table has 45M records and its size is 23G. The import of the data\nfrom the oracle remote database is taking 1 hour and 38 minutes. After that\nI create 13 regular indexes on the table and it takes 10 minutes per table\n->2 hours and 10 minutes in total.\n\nThe second table has 29M records and its size is 26G. The import of the\ndata from the oracle remote database is taking 2 hours and 30 minutes. The\ncreation of the indexes takes 1 hours and 30 minutes (some are indexes on\none column and the creation takes 5 min and some are indexes on multiples\ncolumn and it takes 11 min.\n\nThose operation are very problematic for me and I'm searching for a\nsolution to improve the performance. The parameters I assigned :\n\nmin_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5\nmax_worker_processes = 8\neffective_cache_size = 2500MB\nwork_mem = 16MB\nmaintenance_work_mem = 1500MB\nshared_buffers = 2000MB\nRAM : 5G\nCPU CORES : 8\n\n*-I tried running select count(*) from table in oracle and in postgresql\nthe running time is almost equal.*\n\n*-Before importing the data I drop the indexes and the constraints.*\n\n*-I tried to copy a 23G file from the oracle server to the postgresql\nserver and it took me 12 minutes.*\n\nPlease advice how can I continue ? How can I improve something in this\noperation ?\n\nI have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes. After that I create 13 regular indexes on the table and it takes 10 minutes per table ->2 hours and 10 minutes in total.The second table has 29M records and its size is 26G. The import of the data from the oracle remote database is taking 2 hours and 30 minutes. The creation of the indexes takes 1 hours and 30 minutes (some are indexes on one column and the creation takes 5 min and some are indexes on multiples column and it takes 11 min.Those operation are very problematic for me and I'm searching for a solution to improve the performance. The parameters I assigned :min_parallel_relation_size = 200MBmax_parallel_workers_per_gather = 5 max_worker_processes = 8 effective_cache_size = 2500MBwork_mem = 16MBmaintenance_work_mem = 1500MBshared_buffers = 2000MBRAM : 5GCPU CORES : 8-I tried running select count(*) from table in oracle and in postgresql the running time is almost equal.-Before importing the data I drop the indexes and the constraints.-I tried to copy a 23G file from the oracle server to the postgresql server and it took me 12 minutes.Please advice how can I continue ? How can I improve something in this operation ?",
"msg_date": "Mon, 14 Aug 2017 16:24:24 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance problem on big tables"
},
{
"msg_contents": "Total RAM on your host is 5GB, really? Before touching anything else, \nincrease your RAM. That will be your big performance boost right \nthere. Then, you can \"up\" your effective_cache_size and \nmaintenance_work_mem.\n\nRegards,\nMichael Vitale\n\n> Mariel Cherkassky <mailto:[email protected]>\n> Monday, August 14, 2017 9:24 AM\n>\n> I have performance issues with two big tables. Those tables are \n> located on an oracle remote database. I'm running the quert : |insert \n> into local_postgresql_table select * from oracle_remote_table.|\n>\n> The first table has 45M records and its size is 23G. The import of the \n> data from the oracle remote database is taking 1 hour and 38 minutes. \n> After that I create 13 regular indexes on the table and it takes 10 \n> minutes per table ->2 hours and 10 minutes in total.\n>\n> The second table has 29M records and its size is 26G. The import of \n> the data from the oracle remote database is taking 2 hours and 30 \n> minutes. The creation of the indexes takes 1 hours and 30 minutes \n> (some are indexes on one column and the creation takes 5 min and some \n> are indexes on multiples column and it takes 11 min.\n>\n> Those operation are very problematic for me and I'm searching for a \n> solution to improve the performance. The parameters I assigned :\n>\n> min_parallel_relation_size =200MB\n> ||\n> max_parallel_workers_per_gather =5\n> max_worker_processes =8\n> effective_cache_size =2500MB\n> work_mem =16MB\n> maintenance_work_mem =1500MB\n> shared_buffers =2000MB\n> RAM :5G\n> CPU CORES :8\n>\n> *-I tried running select count(*) from table in oracle and in \n> postgresql the running time is almost equal.*\n>\n> *-Before importing the data I drop the indexes and the constraints.*\n>\n> *-I tried to copy a 23G file from the oracle server to the postgresql \n> server and it took me 12 minutes.*\n>\n> Please advice how can I continue ? How can I improve something in this \n> operation ?\n>\n\n\n\n\nTotal RAM on your host is \n5GB, really? Before touching anything else, \nincrease your RAM. That will be your big performance boost right \nthere. Then, you can \"up\" your effective_cache_size and \nmaintenance_work_mem.\n\nRegards,\nMichael Vitale\n\n\n\n \nMariel Cherkassky Monday,\n August 14, 2017 9:24 AM \nI\n have performance issues with two big tables. Those tables are located \non an oracle remote database. I'm running the quert : insert\n into local_postgresql_table select * from oracle_remote_table.The\n first table has 45M records and its size is 23G. The import of the data\n from the oracle remote database is taking 1 hour and 38 minutes. After \nthat I create 13 regular indexes on the table and it takes 10 minutes \nper table ->2 hours and 10 minutes in total.The\n second table has 29M records and its size is 26G. The import of the \ndata from the oracle remote database is taking 2 hours and 30 minutes. \nThe creation of the indexes takes 1 hours and 30 minutes (some are \nindexes on one column and the creation takes 5 min and some are indexes \non multiples column and it takes 11 min.Those\n operation are very problematic for me and I'm searching for a solution \nto improve the performance. 
The parameters I assigned :min_parallel_relation_size = 200MBmax_parallel_workers_per_gather = 5 max_worker_processes = 8 effective_cache_size = 2500MBwork_mem = 16MBmaintenance_work_mem = 1500MBshared_buffers = 2000MBRAM : 5GCPU CORES : 8-I\n tried running select count(*) from table in oracle and in postgresql \nthe running time is almost equal.-Before\n importing the data I drop the indexes and the constraints.-I\n tried to copy a 23G file from the oracle server to the postgresql \nserver and it took me 12 minutes.Please\n advice how can I continue ? How can I improve something in this \noperation ?",
"msg_date": "Mon, 14 Aug 2017 11:10:51 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
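A small sketch of the configuration bump Michael recommends once more memory is available; the values are placeholders rather than recommendations and would need to fit the actual RAM on the host.

    ALTER SYSTEM SET effective_cache_size = '12GB';
    ALTER SYSTEM SET maintenance_work_mem = '2GB';
    SELECT pg_reload_conf();   -- both settings apply without a server restart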
{
"msg_contents": "Hi.\n\nIn general using COPY is *much faster* than anything else. You can even split the data load and run it in parallel, start with as many jobs as processors you have. Same with indexes, run them in parallel. With parallel I mean various psql running at the same time.\n\nTuning postgres will help too, but not as much as using COPY.\n\nhttps://www.postgresql.org/docs/9.6/static/performance-tips.html <https://www.postgresql.org/docs/9.6/static/performance-tips.html>\n\nhttps://www.postgresql.org/docs/9.6/static/populate.html <https://www.postgresql.org/docs/9.6/static/populate.html>\n\nhttps://www.postgresql.org/docs/9.6/static/populate.html#POPULATE-COPY-FROM\n\nRegards,\n\nDaniel Blanch..\n\n\n\n> El 14 ago 2017, a las 15:24, Mariel Cherkassky <[email protected]> escribió:\n> \n> I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.\n> \n> The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes. After that I create 13 regular indexes on the table and it takes 10 minutes per table ->2 hours and 10 minutes in total.\n> \n> The second table has 29M records and its size is 26G. The import of the data from the oracle remote database is taking 2 hours and 30 minutes. The creation of the indexes takes 1 hours and 30 minutes (some are indexes on one column and the creation takes 5 min and some are indexes on multiples column and it takes 11 min.\n> \n> Those operation are very problematic for me and I'm searching for a solution to improve the performance. The parameters I assigned :\n> \n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5 \n> max_worker_processes = 8 \n> effective_cache_size = 2500MB\n> work_mem = 16MB\n> maintenance_work_mem = 1500MB\n> shared_buffers = 2000MB\n> RAM : 5G\n> CPU CORES : 8\n> -I tried running select count(*) from table in oracle and in postgresql the running time is almost equal.\n> \n> -Before importing the data I drop the indexes and the constraints.\n> \n> -I tried to copy a 23G file from the oracle server to the postgresql server and it took me 12 minutes.\n> \n> Please advice how can I continue ? How can I improve something in this operation ?\n> \n\n\nHi.In general using COPY is *much faster* than anything else. You can even split the data load and run it in parallel, start with as many jobs as processors you have. Same with indexes, run them in parallel. With parallel I mean various psql running at the same time.Tuning postgres will help too, but not as much as using COPY.https://www.postgresql.org/docs/9.6/static/performance-tips.htmlhttps://www.postgresql.org/docs/9.6/static/populate.htmlhttps://www.postgresql.org/docs/9.6/static/populate.html#POPULATE-COPY-FROMRegards,Daniel Blanch..El 14 ago 2017, a las 15:24, Mariel Cherkassky <[email protected]> escribió:I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes. After that I create 13 regular indexes on the table and it takes 10 minutes per table ->2 hours and 10 minutes in total.The second table has 29M records and its size is 26G. 
The import of the data from the oracle remote database is taking 2 hours and 30 minutes. The creation of the indexes takes 1 hours and 30 minutes (some are indexes on one column and the creation takes 5 min and some are indexes on multiples column and it takes 11 min.Those operation are very problematic for me and I'm searching for a solution to improve the performance. The parameters I assigned :min_parallel_relation_size = 200MBmax_parallel_workers_per_gather = 5 max_worker_processes = 8 effective_cache_size = 2500MBwork_mem = 16MBmaintenance_work_mem = 1500MBshared_buffers = 2000MBRAM : 5GCPU CORES : 8-I tried running select count(*) from table in oracle and in postgresql the running time is almost equal.-Before importing the data I drop the indexes and the constraints.-I tried to copy a 23G file from the oracle server to the postgresql server and it took me 12 minutes.Please advice how can I continue ? How can I improve something in this operation ?",
"msg_date": "Mon, 14 Aug 2017 17:11:55 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
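A hedged sketch of the COPY-based load Daniel outlines above; the table is the one from the original post, while the part files, column names and memory value are only illustrative. Each \copy and each CREATE INDEX is meant to run in its own psql session so the work proceeds in parallel.

    -- one psql session per pre-split part file
    \copy local_postgresql_table FROM '/tmp/parts/part_01.csv' WITH (FORMAT csv)

    -- once the data is in, build the indexes from several sessions at the same time
    SET maintenance_work_mem = '1GB';
    CREATE INDEX ON local_postgresql_table (col_a);          -- session 1
    CREATE INDEX ON local_postgresql_table (col_b, col_c);   -- session 2, and so on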
{
"msg_contents": "Moving that many gigs of data across your network could also take a long\ntime simply depending on your network configuration. Before spending a\nhuge amount of energy tuning postgresql, I'd probably look at how long it\ntakes to simply copy 20 or 30 G of data between the two machines.\n\n\n\n> El 14 ago 2017, a las 15:24, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> I have performance issues with two big tables. Those tables are located on\n> an oracle remote database. I'm running the quert : insert into\n> local_postgresql_table select * from oracle_remote_table.\n>\n> The first table has 45M records and its size is 23G. The import of the\n> data from the oracle remote database is taking 1 hour and 38 minutes. After\n> that I create 13 regular indexes on the table and it takes 10 minutes per\n> table ->2 hours and 10 minutes in total.\n>\n> The second table has 29M records and its size is 26G. The import of the\n> data from the oracle remote database is taking 2 hours and 30 minutes. The\n> creation of the indexes takes 1 hours and 30 minutes (some are indexes on\n> one column and the creation takes 5 min and some are indexes on multiples\n> column and it takes 11 min.\n>\n>\n>\n\nMoving that many gigs of data across your network could also take a long time simply depending on your network configuration. Before spending a huge amount of energy tuning postgresql, I'd probably look at how long it takes to simply copy 20 or 30 G of data between the two machines.El 14 ago 2017, a las 15:24, Mariel Cherkassky <[email protected]> escribió:I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes. After that I create 13 regular indexes on the table and it takes 10 minutes per table ->2 hours and 10 minutes in total.The second table has 29M records and its size is 26G. The import of the data from the oracle remote database is taking 2 hours and 30 minutes. The creation of the indexes takes 1 hours and 30 minutes (some are indexes on one column and the creation takes 5 min and some are indexes on multiples column and it takes 11 min.",
"msg_date": "Mon, 14 Aug 2017 11:45:01 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
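A quick way to do the measurement suggested above is simply to time a raw file transfer between the two hosts, independent of Oracle and PostgreSQL; oracle-host and /tmp/bigfile are placeholders:

    # raw network throughput between the Oracle server and the PostgreSQL server
    time scp oracle-host:/tmp/bigfile /tmp/

Comparing this number with the \copy timings separates plain network throughput from oracle_fdw / PostgreSQL overhead.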
{
"msg_contents": "On Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky <\[email protected]> wrote:\n\n> I have performance issues with two big tables. Those tables are located on\n> an oracle remote database. I'm running the quert : insert into\n> local_postgresql_table select * from oracle_remote_table.\n>\n> The first table has 45M records and its size is 23G. The import of the\n> data from the oracle remote database is taking 1 hour and 38 minutes.\n>\nTo investigate this, I'd decouple the two steps and see how long each one\ntakes:\n\n\\copy (select * from oracle_remote_table) to /tmp/tmp with binary\n\\copy local_postresql_table from /tmp/tmp with binary\n\nCheers,\n\nJeff\n\nOn Mon, Aug 14, 2017 at 6:24 AM, Mariel Cherkassky <[email protected]> wrote:I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes.To investigate this, I'd decouple the two steps and see how long each one takes:\\copy (select * from oracle_remote_table) to /tmp/tmp with binary\\copy local_postresql_table from /tmp/tmp with binaryCheers,Jeff",
"msg_date": "Mon, 14 Aug 2017 09:39:06 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
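The two steps above can be timed individually from a psql session; \timing prints the elapsed time of each command, and the table names are the ones used in this thread:

    \timing on
    \copy (select * from oracle_remote_table) to /tmp/tmp with binary
    \copy local_postgresql_table from /tmp/tmp with binary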
{
"msg_contents": "Hi,\nSo I I run the cheks that jeff mentioned :\n\\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\nand 35 minutes\n\\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\nthe remote oracle database is currently under maintenance work.\n\nSo I decided to follow MichaelDBA tips and I set the ram on my machine to\n16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n2G and maintenance_work_mem to 4G.\n\nI started running the copy checks again and for now it coppied 5G in 10\nminutes. I have some questions :\n1)When I run insert into local_postresql_table select * from\nremote_oracle_table I insert that data as bulk to the local table or row by\nrow ? If the answer as bulk than why copy is a better option for this case\n?\n2)The copy from dump into the postgresql database should take less time\nthan the copy to dump ?\n3)What do you think about the new memory parameters that I cofigured ?\n\n\n\n\n\n\n2017-08-14 16:24 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> I have performance issues with two big tables. Those tables are located on\n> an oracle remote database. I'm running the quert : insert into\n> local_postgresql_table select * from oracle_remote_table.\n>\n> The first table has 45M records and its size is 23G. The import of the\n> data from the oracle remote database is taking 1 hour and 38 minutes. After\n> that I create 13 regular indexes on the table and it takes 10 minutes per\n> table ->2 hours and 10 minutes in total.\n>\n> The second table has 29M records and its size is 26G. The import of the\n> data from the oracle remote database is taking 2 hours and 30 minutes. The\n> creation of the indexes takes 1 hours and 30 minutes (some are indexes on\n> one column and the creation takes 5 min and some are indexes on multiples\n> column and it takes 11 min.\n>\n> Those operation are very problematic for me and I'm searching for a\n> solution to improve the performance. The parameters I assigned :\n>\n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5\n> max_worker_processes = 8\n> effective_cache_size = 2500MB\n> work_mem = 16MB\n> maintenance_work_mem = 1500MB\n> shared_buffers = 2000MB\n> RAM : 5G\n> CPU CORES : 8\n>\n> *-I tried running select count(*) from table in oracle and in postgresql\n> the running time is almost equal.*\n>\n> *-Before importing the data I drop the indexes and the constraints.*\n>\n> *-I tried to copy a 23G file from the oracle server to the postgresql\n> server and it took me 12 minutes.*\n>\n> Please advice how can I continue ? How can I improve something in this\n> operation ?\n>\n\nHi,So I I run the cheks that jeff mentioned : \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour and 35 minutes\\copy local_postresql_table from /tmp/tmp with binary - Didnt run because the remote oracle database is currently under maintenance work.So I decided to follow MichaelDBA tips and I set the ram on my machine to 16G and I configured the effective_cache memory to 14G,tshared_buffer to be 2G and maintenance_work_mem to 4G.I started running the copy checks again and for now it coppied 5G in 10 minutes. I have some questions : 1)When I run insert into local_postresql_table select * from remote_oracle_table I insert that data as bulk to the local table or row by row ? If the answer as bulk than why copy is a better option for this case ? 
2)The copy from dump into the postgresql database should take less time than the copy to dump ?3)What do you think about the new memory parameters that I cofigured ?2017-08-14 16:24 GMT+03:00 Mariel Cherkassky <[email protected]>:I have performance issues with two big tables. Those tables are located on an oracle remote database. I'm running the quert : insert into local_postgresql_table select * from oracle_remote_table.The first table has 45M records and its size is 23G. The import of the data from the oracle remote database is taking 1 hour and 38 minutes. After that I create 13 regular indexes on the table and it takes 10 minutes per table ->2 hours and 10 minutes in total.The second table has 29M records and its size is 26G. The import of the data from the oracle remote database is taking 2 hours and 30 minutes. The creation of the indexes takes 1 hours and 30 minutes (some are indexes on one column and the creation takes 5 min and some are indexes on multiples column and it takes 11 min.Those operation are very problematic for me and I'm searching for a solution to improve the performance. The parameters I assigned :min_parallel_relation_size = 200MBmax_parallel_workers_per_gather = 5 max_worker_processes = 8 effective_cache_size = 2500MBwork_mem = 16MBmaintenance_work_mem = 1500MBshared_buffers = 2000MBRAM : 5GCPU CORES : 8-I tried running select count(*) from table in oracle and in postgresql the running time is almost equal.-Before importing the data I drop the indexes and the constraints.-I tried to copy a 23G file from the oracle server to the postgresql server and it took me 12 minutes.Please advice how can I continue ? How can I improve something in this operation ?",
"msg_date": "Tue, 15 Aug 2017 13:06:40 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <\[email protected]> wrote:\n\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n>\n\nThe \"\\copy...from\" doesn't depend on oracle, it would be only depend on\nlocal file system (/tmp/tmp), provided that the \"\\copy...to\" finished.\nAnyway, given the length of time it took, I think you can conclude the\nbottleneck is in oracle_fdw itself, or in Oracle, or the network.\n\nCheers,\n\nJeff\n\nOn Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <[email protected]> wrote:Hi,So I I run the cheks that jeff mentioned : \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour and 35 minutes\\copy local_postresql_table from /tmp/tmp with binary - Didnt run because the remote oracle database is currently under maintenance work.The \"\\copy...from\" doesn't depend on oracle, it would be only depend on local file system (/tmp/tmp), provided that the \"\\copy...to\" finished. Anyway, given the length of time it took, I think you can conclude the bottleneck is in oracle_fdw itself, or in Oracle, or the network.Cheers,Jeff",
"msg_date": "Tue, 15 Aug 2017 09:13:30 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Aug 2017 11:14:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
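A sketch of the file_fdw variant mentioned above, assuming the dump is re-exported as CSV (rather than COPY binary) and that the foreign table's column list matches the target. The column names id and payload, the server name local_files and the file path are placeholders; local_postgresql_table is the table from this thread:

    CREATE EXTENSION IF NOT EXISTS file_fdw;
    CREATE SERVER local_files FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE oracle_dump_csv (
        id      bigint,      -- same column list and types as the target table
        payload text
    ) SERVER local_files
      OPTIONS (filename '/tmp/tmp.csv', format 'csv');

    INSERT INTO local_postgresql_table SELECT * FROM oracle_dump_csv;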
{
"msg_contents": "2017-08-15 18:13 GMT+02:00 Jeff Janes <[email protected]>:\n\n> On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <\n> [email protected]> wrote:\n>\n>> Hi,\n>> So I I run the cheks that jeff mentioned :\n>> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1\n>> hour and 35 minutes\n>> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n>> the remote oracle database is currently under maintenance work.\n>>\n>\n> The \"\\copy...from\" doesn't depend on oracle, it would be only depend on\n> local file system (/tmp/tmp), provided that the \"\\copy...to\" finished.\n> Anyway, given the length of time it took, I think you can conclude the\n> bottleneck is in oracle_fdw itself, or in Oracle, or the network.\n>\n\ndumping from Oracle is not fast - I seen it when oracle_fdw or ora2pg cases.\n\nRegards\n\nPavel\n\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n\n2017-08-15 18:13 GMT+02:00 Jeff Janes <[email protected]>:On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <[email protected]> wrote:Hi,So I I run the cheks that jeff mentioned : \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour and 35 minutes\\copy local_postresql_table from /tmp/tmp with binary - Didnt run because the remote oracle database is currently under maintenance work.The \"\\copy...from\" doesn't depend on oracle, it would be only depend on local file system (/tmp/tmp), provided that the \"\\copy...to\" finished. Anyway, given the length of time it took, I think you can conclude the bottleneck is in oracle_fdw itself, or in Oracle, or the network.dumping from Oracle is not fast - I seen it when oracle_fdw or ora2pg cases.RegardsPavel Cheers,Jeff",
"msg_date": "Tue, 15 Aug 2017 19:44:51 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "After all the changes of the memory parameters the same operation(without\nthe copy utility) didnt run much faster - it took one minute less. I made\na test with the copy command (without the 'with binary') and it took 1.5\nhours to create the dumpfile in my local postgresql server. Then I tried to\nrun the copy from the local dump and it is already running two hours and it\ndidnt even finish. I looked at the server log and I saw that I run the copy\ncommand at 13:18:05, 3 minutes later checkpoint started and completed and\nthere are no messages in the log after that. What can I do ? Improving the\nmemory parameters and the memory on the server didnt help and for now the\ncopy command doesnt help either.\n\n\n\n\n2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:\n\n> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > Hi,\n> > So I I run the cheks that jeff mentioned :\n> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1\n> hour\n> > and 35 minutes\n>\n> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n> right (it's early, I haven't had enough coffee please check my math).\n> That's pretty slow unless you're working across pretty big distances\n> with mediocre connections. My home internet downloads about 100MB/s\n> by comparison.\n>\n> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> > the remote oracle database is currently under maintenance work.\n>\n> You shouldn't need the remote oracle server if you've already copied\n> it over, you're just copying from local disk into the local pgsql db.\n> Unless I'm missing something.\n>\n> > So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> > 16G and I configured the effective_cache memory to 14G,tshared_buffer to\n> be\n> > 2G and maintenance_work_mem to 4G.\n>\n> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>\n> > I started running the copy checks again and for now it coppied 5G in 10\n> > minutes. I have some questions :\n> > 1)When I run insert into local_postresql_table select * from\n> > remote_oracle_table I insert that data as bulk to the local table or row\n> by\n> > row ? If the answer as bulk than why copy is a better option for this\n> case\n> > ?\n>\n> insert into select from oracle remote is one big copy, but it will\n> take at least as long as copying from oracle to the local network\n> took. Compare that to the same thing but use file_fdw on the file\n> locally.\n>\n> > 2)The copy from dump into the postgresql database should take less time\n> than\n> > the copy to dump ?\n>\n> Yes. The copy from Oracle to your local drive is painfully slow for a\n> modern network connection.\n>\n> > 3)What do you think about the new memory parameters that I cofigured ?\n>\n> They should be OK. I'm more worried about the performance of the io\n> subsystem tbh.\n>\n\nAfter all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? 
Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 15:26:29 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
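Two quick checks from a second psql session that help tell a stalled COPY from one that is merely slow; the wait_event columns exist from 9.6 on, and the table name is the one from this thread:

    -- is the COPY still running, and what is it waiting on?
    SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE query ILIKE 'copy%';

    -- is the target table actually growing? run this a few minutes apart
    SELECT pg_size_pretty(pg_total_relation_size('local_postgresql_table'));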
{
"msg_contents": "See if the copy command is actually working, copy should be very fast from your local disk.\n\n\n> El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:\n> \n> \n> After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.\n> \n> \n> \n> \n> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected] <mailto:[email protected]>>:\n> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n> <[email protected] <mailto:[email protected]>> wrote:\n> > Hi,\n> > So I I run the cheks that jeff mentioned :\n> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> > and 35 minutes\n> \n> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n> right (it's early, I haven't had enough coffee please check my math).\n> That's pretty slow unless you're working across pretty big distances\n> with mediocre connections. My home internet downloads about 100MB/s\n> by comparison.\n> \n> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> > the remote oracle database is currently under maintenance work.\n> \n> You shouldn't need the remote oracle server if you've already copied\n> it over, you're just copying from local disk into the local pgsql db.\n> Unless I'm missing something.\n> \n> > So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> > 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> > 2G and maintenance_work_mem to 4G.\n> \n> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n> \n> > I started running the copy checks again and for now it coppied 5G in 10\n> > minutes. I have some questions :\n> > 1)When I run insert into local_postresql_table select * from\n> > remote_oracle_table I insert that data as bulk to the local table or row by\n> > row ? If the answer as bulk than why copy is a better option for this case\n> > ?\n> \n> insert into select from oracle remote is one big copy, but it will\n> take at least as long as copying from oracle to the local network\n> took. Compare that to the same thing but use file_fdw on the file\n> locally.\n> \n> > 2)The copy from dump into the postgresql database should take less time than\n> > the copy to dump ?\n> \n> Yes. The copy from Oracle to your local drive is painfully slow for a\n> modern network connection.\n> \n> > 3)What do you think about the new memory parameters that I cofigured ?\n> \n> They should be OK. I'm more worried about the performance of the io\n> subsystem tbh.\n> \n\n\nSee if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. 
I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 15:08:56 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "I run the copy command via psql to create a local dump of a 3G table and it\ntook me 134059.732ms =~2 minutes. After that I imported the data via copy\nand it took 458648.677ms =~7 minutes. So the copy command works but pretty\nslow.\n\n2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <\[email protected]>:\n\n> See if the copy command is actually working, copy should be very fast from\n> your local disk.\n>\n>\n> El 16 ago 2017, a las 14:26, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n>\n> After all the changes of the memory parameters the same operation(without\n> the copy utility) didnt run much faster - it took one minute less. I made\n> a test with the copy command (without the 'with binary') and it took 1.5\n> hours to create the dumpfile in my local postgresql server. Then I tried to\n> run the copy from the local dump and it is already running two hours and it\n> didnt even finish. I looked at the server log and I saw that I run the copy\n> command at 13:18:05, 3 minutes later checkpoint started and completed and\n> there are no messages in the log after that. What can I do ? Improving the\n> memory parameters and the memory on the server didnt help and for now the\n> copy command doesnt help either.\n>\n>\n>\n>\n> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:\n>\n>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>> <[email protected]> wrote:\n>> > Hi,\n>> > So I I run the cheks that jeff mentioned :\n>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1\n>> hour\n>> > and 35 minutes\n>>\n>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>> right (it's early, I haven't had enough coffee please check my math).\n>> That's pretty slow unless you're working across pretty big distances\n>> with mediocre connections. My home internet downloads about 100MB/s\n>> by comparison.\n>>\n>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run\n>> because\n>> > the remote oracle database is currently under maintenance work.\n>>\n>> You shouldn't need the remote oracle server if you've already copied\n>> it over, you're just copying from local disk into the local pgsql db.\n>> Unless I'm missing something.\n>>\n>> > So I decided to follow MichaelDBA tips and I set the ram on my machine\n>> to\n>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer\n>> to be\n>> > 2G and maintenance_work_mem to 4G.\n>>\n>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>\n>> > I started running the copy checks again and for now it coppied 5G in 10\n>> > minutes. I have some questions :\n>> > 1)When I run insert into local_postresql_table select * from\n>> > remote_oracle_table I insert that data as bulk to the local table or\n>> row by\n>> > row ? If the answer as bulk than why copy is a better option for this\n>> case\n>> > ?\n>>\n>> insert into select from oracle remote is one big copy, but it will\n>> take at least as long as copying from oracle to the local network\n>> took. Compare that to the same thing but use file_fdw on the file\n>> locally.\n>>\n>> > 2)The copy from dump into the postgresql database should take less time\n>> than\n>> > the copy to dump ?\n>>\n>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>> modern network connection.\n>>\n>> > 3)What do you think about the new memory parameters that I cofigured ?\n>>\n>> They should be OK. 
I'm more worried about the performance of the io\n>> subsystem tbh.\n>>\n>\n>\n>\n\nI run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 16:54:06 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
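Taking the reported timings at face value (3 GB ≈ 3072 MB, 134059 ms ≈ 134 s, 458649 ms ≈ 459 s), the throughput works out to roughly:

    dump:    3072 MB / 134 s ≈ 23 MB/s
    restore: 3072 MB / 459 s ≈ 6.7 MB/s

Both figures are well below what a single modern local disk can sustain, which is why the replies that follow turn to the storage layer.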
{
"msg_contents": "Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?\n\n\n> El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:\n> \n> I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. \n> \n> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n> See if the copy command is actually working, copy should be very fast from your local disk.\n> \n> \n>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>> \n>> \n>> After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.\n>> \n>> \n>> \n>> \n>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected] <mailto:[email protected]>>:\n>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>> <[email protected] <mailto:[email protected]>> wrote:\n>> > Hi,\n>> > So I I run the cheks that jeff mentioned :\n>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n>> > and 35 minutes\n>> \n>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>> right (it's early, I haven't had enough coffee please check my math).\n>> That's pretty slow unless you're working across pretty big distances\n>> with mediocre connections. My home internet downloads about 100MB/s\n>> by comparison.\n>> \n>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n>> > the remote oracle database is currently under maintenance work.\n>> \n>> You shouldn't need the remote oracle server if you've already copied\n>> it over, you're just copying from local disk into the local pgsql db.\n>> Unless I'm missing something.\n>> \n>> > So I decided to follow MichaelDBA tips and I set the ram on my machine to\n>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n>> > 2G and maintenance_work_mem to 4G.\n>> \n>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>> \n>> > I started running the copy checks again and for now it coppied 5G in 10\n>> > minutes. I have some questions :\n>> > 1)When I run insert into local_postresql_table select * from\n>> > remote_oracle_table I insert that data as bulk to the local table or row by\n>> > row ? If the answer as bulk than why copy is a better option for this case\n>> > ?\n>> \n>> insert into select from oracle remote is one big copy, but it will\n>> take at least as long as copying from oracle to the local network\n>> took. 
Compare that to the same thing but use file_fdw on the file\n>> locally.\n>> \n>> > 2)The copy from dump into the postgresql database should take less time than\n>> > the copy to dump ?\n>> \n>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>> modern network connection.\n>> \n>> > 3)What do you think about the new memory parameters that I cofigured ?\n>> \n>> They should be OK. I'm more worried about the performance of the io\n>> subsystem tbh.\n>> \n> \n> \n\n\nConsidering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? 
If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 16:04:30 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
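On the checkpoint side, the populate.html page linked earlier in the thread recommends, among other things, making checkpoints rare while bulk loading. A sketch of the relevant postgresql.conf settings for the duration of the load only; the values are illustrative, not tuned for this particular server:

    max_wal_size = 10GB                  # fewer forced checkpoints during the load (9.5+)
    checkpoint_timeout = 30min
    checkpoint_completion_target = 0.9
    wal_compression = on                 # optional: trades CPU for less WAL volume (9.5+)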
{
"msg_contents": "My server is virtual and it have virtual hd from a vnx storage machine. The\nlogs and the data are on the same disk.\n\n2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <\[email protected]>:\n\n> Considering it has to write logs and data at checkpoints I don’t see it\n> particularly slow compared to the extract phase. What kind of disks you\n> have SSD or regular disks? Different disks for ltransaction logs and data?\n>\n>\n> El 16 ago 2017, a las 15:54, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> I run the copy command via psql to create a local dump of a 3G table and\n> it took me 134059.732ms =~2 minutes. After that I imported the data via\n> copy and it took 458648.677ms =~7 minutes. So the copy command works but\n> pretty slow.\n>\n> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <\n> [email protected]>:\n>\n>> See if the copy command is actually working, copy should be very fast\n>> from your local disk.\n>>\n>>\n>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <\n>> [email protected]> escribió:\n>>\n>>\n>> After all the changes of the memory parameters the same operation(without\n>> the copy utility) didnt run much faster - it took one minute less. I made\n>> a test with the copy command (without the 'with binary') and it took 1.5\n>> hours to create the dumpfile in my local postgresql server. Then I tried to\n>> run the copy from the local dump and it is already running two hours and it\n>> didnt even finish. I looked at the server log and I saw that I run the copy\n>> command at 13:18:05, 3 minutes later checkpoint started and completed and\n>> there are no messages in the log after that. What can I do ? Improving the\n>> memory parameters and the memory on the server didnt help and for now the\n>> copy command doesnt help either.\n>>\n>>\n>>\n>>\n>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:\n>>\n>>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>>> <[email protected]> wrote:\n>>> > Hi,\n>>> > So I I run the cheks that jeff mentioned :\n>>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1\n>>> hour\n>>> > and 35 minutes\n>>>\n>>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>>> right (it's early, I haven't had enough coffee please check my math).\n>>> That's pretty slow unless you're working across pretty big distances\n>>> with mediocre connections. My home internet downloads about 100MB/s\n>>> by comparison.\n>>>\n>>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run\n>>> because\n>>> > the remote oracle database is currently under maintenance work.\n>>>\n>>> You shouldn't need the remote oracle server if you've already copied\n>>> it over, you're just copying from local disk into the local pgsql db.\n>>> Unless I'm missing something.\n>>>\n>>> > So I decided to follow MichaelDBA tips and I set the ram on my machine\n>>> to\n>>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer\n>>> to be\n>>> > 2G and maintenance_work_mem to 4G.\n>>>\n>>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>>\n>>> > I started running the copy checks again and for now it coppied 5G in 10\n>>> > minutes. I have some questions :\n>>> > 1)When I run insert into local_postresql_table select * from\n>>> > remote_oracle_table I insert that data as bulk to the local table or\n>>> row by\n>>> > row ? 
If the answer as bulk than why copy is a better option for this\n>>> case\n>>> > ?\n>>>\n>>> insert into select from oracle remote is one big copy, but it will\n>>> take at least as long as copying from oracle to the local network\n>>> took. Compare that to the same thing but use file_fdw on the file\n>>> locally.\n>>>\n>>> > 2)The copy from dump into the postgresql database should take less\n>>> time than\n>>> > the copy to dump ?\n>>>\n>>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>>> modern network connection.\n>>>\n>>> > 3)What do you think about the new memory parameters that I cofigured ?\n>>>\n>>> They should be OK. I'm more worried about the performance of the io\n>>> subsystem tbh.\n>>>\n>>\n>>\n>>\n>\n>\n\nMy server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. 
Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 17:32:25 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
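A rough way to check what the VNX-backed virtual disk can actually sustain, independent of PostgreSQL; /var/lib/pgsql/testfile is a placeholder path that should sit on the same volume as the data directory:

    # sequential write throughput (conv=fdatasync flushes before dd reports the rate)
    dd if=/dev/zero of=/var/lib/pgsql/testfile bs=8M count=256 conv=fdatasync
    rm /var/lib/pgsql/testfile

    # commit/fsync latency as PostgreSQL sees it (pg_test_fsync ships with PostgreSQL)
    pg_test_fsync

If these numbers are in the same ballpark as the ~7 MB/s measured for the 3 GB restore, the storage is the bottleneck rather than COPY itself.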
{
"msg_contents": "Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. \n\nSimilar proportion you had, but much faster. \n\nconfirm I/O is your bottleneck, and tell us how you solved your problem\n\nAnyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!\n\n\nRegards,\n\nDaniel\n\n> El 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected]> escribió:\n> \n> My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.\n> \n> 2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n> Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?\n> \n> \n>> El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>> \n>> I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. \n>> \n>> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n>> See if the copy command is actually working, copy should be very fast from your local disk.\n>> \n>> \n>>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>>> \n>>> \n>>> After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.\n>>> \n>>> \n>>> \n>>> \n>>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected] <mailto:[email protected]>>:\n>>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>>> <[email protected] <mailto:[email protected]>> wrote:\n>>> > Hi,\n>>> > So I I run the cheks that jeff mentioned :\n>>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n>>> > and 35 minutes\n>>> \n>>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>>> right (it's early, I haven't had enough coffee please check my math).\n>>> That's pretty slow unless you're working across pretty big distances\n>>> with mediocre connections. 
My home internet downloads about 100MB/s\n>>> by comparison.\n>>> \n>>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n>>> > the remote oracle database is currently under maintenance work.\n>>> \n>>> You shouldn't need the remote oracle server if you've already copied\n>>> it over, you're just copying from local disk into the local pgsql db.\n>>> Unless I'm missing something.\n>>> \n>>> > So I decided to follow MichaelDBA tips and I set the ram on my machine to\n>>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n>>> > 2G and maintenance_work_mem to 4G.\n>>> \n>>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>> \n>>> > I started running the copy checks again and for now it coppied 5G in 10\n>>> > minutes. I have some questions :\n>>> > 1)When I run insert into local_postresql_table select * from\n>>> > remote_oracle_table I insert that data as bulk to the local table or row by\n>>> > row ? If the answer as bulk than why copy is a better option for this case\n>>> > ?\n>>> \n>>> insert into select from oracle remote is one big copy, but it will\n>>> take at least as long as copying from oracle to the local network\n>>> took. Compare that to the same thing but use file_fdw on the file\n>>> locally.\n>>> \n>>> > 2)The copy from dump into the postgresql database should take less time than\n>>> > the copy to dump ?\n>>> \n>>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>>> modern network connection.\n>>> \n>>> > 3)What do you think about the new memory parameters that I cofigured ?\n>>> \n>>> They should be OK. I'm more worried about the performance of the io\n>>> subsystem tbh.\n>>> \n>> \n>> \n> \n> \n\n\nSeems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. Similar proportion you had, but much faster. confirm I/O is your bottleneck, and tell us how you solved your problemAnyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!Regards,DanielEl 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected]> escribió:My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. 
I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Wed, 16 Aug 2017 23:46:18 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
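The unlogged-table tip in the message above can be applied with plain DDL. A minimal sketch, assuming PostgreSQL 9.5 or later (where `ALTER TABLE ... SET UNLOGGED/LOGGED` is available); the table name is the hypothetical one used elsewhere in the thread:

```sql
-- Skip WAL writes while bulk loading, then switch back.
-- NOTE: an unlogged table is truncated after a crash and is not replicated,
-- so only do this for data you can reload from the source.
ALTER TABLE local_postgresql_table SET UNLOGGED;

-- bulk load here, e.g.
-- INSERT INTO local_postgresql_table SELECT * FROM remote_oracle_table;

-- Re-enable WAL logging when the load is done. This writes the whole table
-- to WAL, so it is not free, but the table is crash-safe again afterwards.
ALTER TABLE local_postgresql_table SET LOGGED;
```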
{
"msg_contents": "Hi Daniel,\nI already tried to set the destination table to unlogged - it improved the\nperformance slightly. Is there a way to make sure that I/O is the problem ?\n\n2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <\[email protected]>:\n\n> Seems your disks are too slow. On my laptop (nothing special, just one\n> disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare\n> copying 3G takes 10 secs.\n>\n> Similar proportion you had, but much faster.\n>\n> confirm I/O is your bottleneck, and tell us how you solved your problem\n>\n> Anyway, You can cut import time by half if you set your destination table\n> to unlogged (postgres will write half the data, it will save the\n> transaction log writing). Remember to set it to logged when finished!!\n>\n>\n> Regards,\n>\n> Daniel\n>\n> El 16 ago 2017, a las 16:32, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> My server is virtual and it have virtual hd from a vnx storage machine.\n> The logs and the data are on the same disk.\n>\n> 2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <\n> [email protected]>:\n>\n>> Considering it has to write logs and data at checkpoints I don’t see it\n>> particularly slow compared to the extract phase. What kind of disks you\n>> have SSD or regular disks? Different disks for ltransaction logs and data?\n>>\n>>\n>> El 16 ago 2017, a las 15:54, Mariel Cherkassky <\n>> [email protected]> escribió:\n>>\n>> I run the copy command via psql to create a local dump of a 3G table and\n>> it took me 134059.732ms =~2 minutes. After that I imported the data via\n>> copy and it took 458648.677ms =~7 minutes. So the copy command works but\n>> pretty slow.\n>>\n>> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <\n>> [email protected]>:\n>>\n>>> See if the copy command is actually working, copy should be very fast\n>>> from your local disk.\n>>>\n>>>\n>>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <\n>>> [email protected]> escribió:\n>>>\n>>>\n>>> After all the changes of the memory parameters the same\n>>> operation(without the copy utility) didnt run much faster - it took one\n>>> minute less. I made a test with the copy command (without the 'with\n>>> binary') and it took 1.5 hours to create the dumpfile in my local\n>>> postgresql server. Then I tried to run the copy from the local dump and it\n>>> is already running two hours and it didnt even finish. I looked at the\n>>> server log and I saw that I run the copy command at 13:18:05, 3 minutes\n>>> later checkpoint started and completed and there are no messages in the log\n>>> after that. What can I do ? Improving the memory parameters and the memory\n>>> on the server didnt help and for now the copy command doesnt help either.\n>>>\n>>>\n>>>\n>>>\n>>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:\n>>>\n>>>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>>>> <[email protected]> wrote:\n>>>> > Hi,\n>>>> > So I I run the cheks that jeff mentioned :\n>>>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1\n>>>> hour\n>>>> > and 35 minutes\n>>>>\n>>>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>>>> right (it's early, I haven't had enough coffee please check my math).\n>>>> That's pretty slow unless you're working across pretty big distances\n>>>> with mediocre connections. 
My home internet downloads about 100MB/s\n>>>> by comparison.\n>>>>\n>>>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run\n>>>> because\n>>>> > the remote oracle database is currently under maintenance work.\n>>>>\n>>>> You shouldn't need the remote oracle server if you've already copied\n>>>> it over, you're just copying from local disk into the local pgsql db.\n>>>> Unless I'm missing something.\n>>>>\n>>>> > So I decided to follow MichaelDBA tips and I set the ram on my\n>>>> machine to\n>>>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer\n>>>> to be\n>>>> > 2G and maintenance_work_mem to 4G.\n>>>>\n>>>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>>>\n>>>> > I started running the copy checks again and for now it coppied 5G in\n>>>> 10\n>>>> > minutes. I have some questions :\n>>>> > 1)When I run insert into local_postresql_table select * from\n>>>> > remote_oracle_table I insert that data as bulk to the local table or\n>>>> row by\n>>>> > row ? If the answer as bulk than why copy is a better option for\n>>>> this case\n>>>> > ?\n>>>>\n>>>> insert into select from oracle remote is one big copy, but it will\n>>>> take at least as long as copying from oracle to the local network\n>>>> took. Compare that to the same thing but use file_fdw on the file\n>>>> locally.\n>>>>\n>>>> > 2)The copy from dump into the postgresql database should take less\n>>>> time than\n>>>> > the copy to dump ?\n>>>>\n>>>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>>>> modern network connection.\n>>>>\n>>>> > 3)What do you think about the new memory parameters that I cofigured ?\n>>>>\n>>>> They should be OK. I'm more worried about the performance of the io\n>>>> subsystem tbh.\n>>>>\n>>>\n>>>\n>>>\n>>\n>>\n>\n>\n\nHi Daniel,I already tried to set the destination table to unlogged - it improved the performance slightly. Is there a way to make sure that I/O is the problem ? 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. Similar proportion you had, but much faster. confirm I/O is your bottleneck, and tell us how you solved your problemAnyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!Regards,DanielEl 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected]> escribió:My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 
2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Thu, 17 Aug 2017 09:25:32 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
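One way to answer the "is I/O really the problem?" question from inside PostgreSQL is to enable `track_io_timing` and look at the cumulative block read/write times. A minimal sketch, assuming superuser access; the view and columns are standard `pg_stat_database` fields:

```sql
-- Record time spent waiting on data-file reads and writes (small overhead).
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- After running the slow load, compare the time spent on block I/O (in ms)
-- with the total elapsed time of the operation.
SELECT datname, blk_read_time, blk_write_time
FROM pg_stat_database
WHERE datname = current_database();
```

With `track_io_timing` on, `EXPLAIN (ANALYZE, BUFFERS)` also reports I/O timings per plan node, which helps attribute the waits to a specific statement.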
{
"msg_contents": "I checked with the storage team in the company and they saw that I have\nalot of io on the server. How should I reduce the io that the postgresql\nuses ?\n\n2017-08-17 9:25 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> Hi Daniel,\n> I already tried to set the destination table to unlogged - it improved the\n> performance slightly. Is there a way to make sure that I/O is the problem ?\n>\n> 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <\n> [email protected]>:\n>\n>> Seems your disks are too slow. On my laptop (nothing special, just one\n>> disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare\n>> copying 3G takes 10 secs.\n>>\n>> Similar proportion you had, but much faster.\n>>\n>> confirm I/O is your bottleneck, and tell us how you solved your problem\n>>\n>> Anyway, You can cut import time by half if you set your destination table\n>> to unlogged (postgres will write half the data, it will save the\n>> transaction log writing). Remember to set it to logged when finished!!\n>>\n>>\n>> Regards,\n>>\n>> Daniel\n>>\n>> El 16 ago 2017, a las 16:32, Mariel Cherkassky <\n>> [email protected]> escribió:\n>>\n>> My server is virtual and it have virtual hd from a vnx storage machine.\n>> The logs and the data are on the same disk.\n>>\n>> 2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <\n>> [email protected]>:\n>>\n>>> Considering it has to write logs and data at checkpoints I don’t see it\n>>> particularly slow compared to the extract phase. What kind of disks you\n>>> have SSD or regular disks? Different disks for ltransaction logs and data?\n>>>\n>>>\n>>> El 16 ago 2017, a las 15:54, Mariel Cherkassky <\n>>> [email protected]> escribió:\n>>>\n>>> I run the copy command via psql to create a local dump of a 3G table and\n>>> it took me 134059.732ms =~2 minutes. After that I imported the data via\n>>> copy and it took 458648.677ms =~7 minutes. So the copy command works but\n>>> pretty slow.\n>>>\n>>> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <\n>>> [email protected]>:\n>>>\n>>>> See if the copy command is actually working, copy should be very fast\n>>>> from your local disk.\n>>>>\n>>>>\n>>>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <\n>>>> [email protected]> escribió:\n>>>>\n>>>>\n>>>> After all the changes of the memory parameters the same\n>>>> operation(without the copy utility) didnt run much faster - it took one\n>>>> minute less. I made a test with the copy command (without the 'with\n>>>> binary') and it took 1.5 hours to create the dumpfile in my local\n>>>> postgresql server. Then I tried to run the copy from the local dump and it\n>>>> is already running two hours and it didnt even finish. I looked at the\n>>>> server log and I saw that I run the copy command at 13:18:05, 3 minutes\n>>>> later checkpoint started and completed and there are no messages in the log\n>>>> after that. What can I do ? Improving the memory parameters and the memory\n>>>> on the server didnt help and for now the copy command doesnt help either.\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:\n>>>>\n>>>>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>>>>> <[email protected]> wrote:\n>>>>> > Hi,\n>>>>> > So I I run the cheks that jeff mentioned :\n>>>>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary -\n>>>>> 1 hour\n>>>>> > and 35 minutes\n>>>>>\n>>>>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? 
Sound about\n>>>>> right (it's early, I haven't had enough coffee please check my math).\n>>>>> That's pretty slow unless you're working across pretty big distances\n>>>>> with mediocre connections. My home internet downloads about 100MB/s\n>>>>> by comparison.\n>>>>>\n>>>>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run\n>>>>> because\n>>>>> > the remote oracle database is currently under maintenance work.\n>>>>>\n>>>>> You shouldn't need the remote oracle server if you've already copied\n>>>>> it over, you're just copying from local disk into the local pgsql db.\n>>>>> Unless I'm missing something.\n>>>>>\n>>>>> > So I decided to follow MichaelDBA tips and I set the ram on my\n>>>>> machine to\n>>>>> > 16G and I configured the effective_cache memory to\n>>>>> 14G,tshared_buffer to be\n>>>>> > 2G and maintenance_work_mem to 4G.\n>>>>>\n>>>>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>>>>\n>>>>> > I started running the copy checks again and for now it coppied 5G in\n>>>>> 10\n>>>>> > minutes. I have some questions :\n>>>>> > 1)When I run insert into local_postresql_table select * from\n>>>>> > remote_oracle_table I insert that data as bulk to the local table or\n>>>>> row by\n>>>>> > row ? If the answer as bulk than why copy is a better option for\n>>>>> this case\n>>>>> > ?\n>>>>>\n>>>>> insert into select from oracle remote is one big copy, but it will\n>>>>> take at least as long as copying from oracle to the local network\n>>>>> took. Compare that to the same thing but use file_fdw on the file\n>>>>> locally.\n>>>>>\n>>>>> > 2)The copy from dump into the postgresql database should take less\n>>>>> time than\n>>>>> > the copy to dump ?\n>>>>>\n>>>>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>>>>> modern network connection.\n>>>>>\n>>>>> > 3)What do you think about the new memory parameters that I cofigured\n>>>>> ?\n>>>>>\n>>>>> They should be OK. I'm more worried about the performance of the io\n>>>>> subsystem tbh.\n>>>>>\n>>>>\n>>>>\n>>>>\n>>>\n>>>\n>>\n>>\n>\n\nI checked with the storage team in the company and they saw that I have alot of io on the server. How should I reduce the io that the postgresql uses ?2017-08-17 9:25 GMT+03:00 Mariel Cherkassky <[email protected]>:Hi Daniel,I already tried to set the destination table to unlogged - it improved the performance slightly. Is there a way to make sure that I/O is the problem ? 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. Similar proportion you had, but much faster. confirm I/O is your bottleneck, and tell us how you solved your problemAnyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!Regards,DanielEl 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected]> escribió:My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? 
Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Thu, 17 Aug 2017 12:00:18 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "I would just check how does it take to copy 3GB using an standard copy command. on my computer it took 10 secs. \n\n\n> El 17 ago 2017, a las 11:00, Mariel Cherkassky <[email protected]> escribió:\n> \n> I checked with the storage team in the company and they saw that I have alot of io on the server. How should I reduce the io that the postgresql uses ?\n> \n> 2017-08-17 9:25 GMT+03:00 Mariel Cherkassky <[email protected] <mailto:[email protected]>>:\n> Hi Daniel,\n> I already tried to set the destination table to unlogged - it improved the performance slightly. Is there a way to make sure that I/O is the problem ? \n> \n> 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n> Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. \n> \n> Similar proportion you had, but much faster. \n> \n> confirm I/O is your bottleneck, and tell us how you solved your problem\n> \n> Anyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!\n> \n> \n> Regards,\n> \n> Daniel\n> \n>> El 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>> \n>> My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.\n>> \n>> 2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n>> Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?\n>> \n>> \n>>> El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>>> \n>>> I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. \n>>> \n>>> 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n>>> See if the copy command is actually working, copy should be very fast from your local disk.\n>>> \n>>> \n>>>> El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>>>> \n>>>> \n>>>> After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? 
Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.\n>>>> \n>>>> \n>>>> \n>>>> \n>>>> 2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected] <mailto:[email protected]>>:\n>>>> On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n>>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>> > Hi,\n>>>> > So I I run the cheks that jeff mentioned :\n>>>> > \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n>>>> > and 35 minutes\n>>>> \n>>>> So 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\n>>>> right (it's early, I haven't had enough coffee please check my math).\n>>>> That's pretty slow unless you're working across pretty big distances\n>>>> with mediocre connections. My home internet downloads about 100MB/s\n>>>> by comparison.\n>>>> \n>>>> > \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n>>>> > the remote oracle database is currently under maintenance work.\n>>>> \n>>>> You shouldn't need the remote oracle server if you've already copied\n>>>> it over, you're just copying from local disk into the local pgsql db.\n>>>> Unless I'm missing something.\n>>>> \n>>>> > So I decided to follow MichaelDBA tips and I set the ram on my machine to\n>>>> > 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n>>>> > 2G and maintenance_work_mem to 4G.\n>>>> \n>>>> Good settings. Maybe set work_mem to 128MB or so while you're at it.\n>>>> \n>>>> > I started running the copy checks again and for now it coppied 5G in 10\n>>>> > minutes. I have some questions :\n>>>> > 1)When I run insert into local_postresql_table select * from\n>>>> > remote_oracle_table I insert that data as bulk to the local table or row by\n>>>> > row ? If the answer as bulk than why copy is a better option for this case\n>>>> > ?\n>>>> \n>>>> insert into select from oracle remote is one big copy, but it will\n>>>> take at least as long as copying from oracle to the local network\n>>>> took. Compare that to the same thing but use file_fdw on the file\n>>>> locally.\n>>>> \n>>>> > 2)The copy from dump into the postgresql database should take less time than\n>>>> > the copy to dump ?\n>>>> \n>>>> Yes. The copy from Oracle to your local drive is painfully slow for a\n>>>> modern network connection.\n>>>> \n>>>> > 3)What do you think about the new memory parameters that I cofigured ?\n>>>> \n>>>> They should be OK. I'm more worried about the performance of the io\n>>>> subsystem tbh.\n>>>> \n>>> \n>>> \n>> \n>> \n> \n> \n\n\nI would just check how does it take to copy 3GB using an standard copy command. on my computer it took 10 secs. El 17 ago 2017, a las 11:00, Mariel Cherkassky <[email protected]> escribió:I checked with the storage team in the company and they saw that I have alot of io on the server. How should I reduce the io that the postgresql uses ?2017-08-17 9:25 GMT+03:00 Mariel Cherkassky <[email protected]>:Hi Daniel,I already tried to set the destination table to unlogged - it improved the performance slightly. Is there a way to make sure that I/O is the problem ? 2017-08-17 0:46 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Seems your disks are too slow. On my laptop (nothing special, just one disk) using COPY I can dump 3G in ~ 20 secs, loading takes 120 secs, bare copying 3G takes 10 secs. Similar proportion you had, but much faster. 
confirm I/O is your bottleneck, and tell us how you solved your problemAnyway, You can cut import time by half if you set your destination table to unlogged (postgres will write half the data, it will save the transaction log writing). Remember to set it to logged when finished!!Regards,DanielEl 16 ago 2017, a las 16:32, Mariel Cherkassky <[email protected]> escribió:My server is virtual and it have virtual hd from a vnx storage machine. The logs and the data are on the same disk.2017-08-16 17:04 GMT+03:00 Daniel Blanch Bataller <[email protected]>:Considering it has to write logs and data at checkpoints I don’t see it particularly slow compared to the extract phase. What kind of disks you have SSD or regular disks? Different disks for ltransaction logs and data?El 16 ago 2017, a las 15:54, Mariel Cherkassky <[email protected]> escribió:I run the copy command via psql to create a local dump of a 3G table and it took me 134059.732ms =~2 minutes. After that I imported the data via copy and it took 458648.677ms =~7 minutes. So the copy command works but pretty slow. 2017-08-16 16:08 GMT+03:00 Daniel Blanch Bataller <[email protected]>:See if the copy command is actually working, copy should be very fast from your local disk.El 16 ago 2017, a las 14:26, Mariel Cherkassky <[email protected]> escribió:After all the changes of the memory parameters the same operation(without the copy utility) didnt run much faster - it took one minute less. I made a test with the copy command (without the 'with binary') and it took 1.5 hours to create the dumpfile in my local postgresql server. Then I tried to run the copy from the local dump and it is already running two hours and it didnt even finish. I looked at the server log and I saw that I run the copy command at 13:18:05, 3 minutes later checkpoint started and completed and there are no messages in the log after that. What can I do ? Improving the memory parameters and the memory on the server didnt help and for now the copy command doesnt help either.2017-08-15 20:14 GMT+03:00 Scott Marlowe <[email protected]>:On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi,\n> So I I run the cheks that jeff mentioned :\n> \\copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour\n> and 35 minutes\n\nSo 26G takes 95 minutes, or 27 MB/minute or 456k/second? Sound about\nright (it's early, I haven't had enough coffee please check my math).\nThat's pretty slow unless you're working across pretty big distances\nwith mediocre connections. My home internet downloads about 100MB/s\nby comparison.\n\n> \\copy local_postresql_table from /tmp/tmp with binary - Didnt run because\n> the remote oracle database is currently under maintenance work.\n\nYou shouldn't need the remote oracle server if you've already copied\nit over, you're just copying from local disk into the local pgsql db.\nUnless I'm missing something.\n\n> So I decided to follow MichaelDBA tips and I set the ram on my machine to\n> 16G and I configured the effective_cache memory to 14G,tshared_buffer to be\n> 2G and maintenance_work_mem to 4G.\n\nGood settings. Maybe set work_mem to 128MB or so while you're at it.\n\n> I started running the copy checks again and for now it coppied 5G in 10\n> minutes. I have some questions :\n> 1)When I run insert into local_postresql_table select * from\n> remote_oracle_table I insert that data as bulk to the local table or row by\n> row ? 
If the answer as bulk than why copy is a better option for this case\n> ?\n\ninsert into select from oracle remote is one big copy, but it will\ntake at least as long as copying from oracle to the local network\ntook. Compare that to the same thing but use file_fdw on the file\nlocally.\n\n> 2)The copy from dump into the postgresql database should take less time than\n> the copy to dump ?\n\nYes. The copy from Oracle to your local drive is painfully slow for a\nmodern network connection.\n\n> 3)What do you think about the new memory parameters that I cofigured ?\n\nThey should be OK. I'm more worried about the performance of the io\nsubsystem tbh.",
"msg_date": "Thu, 17 Aug 2017 12:06:39 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
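The baseline test suggested above (time a plain copy of ~3 GB on local disk, with Oracle out of the picture) can be run directly from psql with `\timing`. A minimal sketch; the file path and the scratch table are hypothetical:

```sql
-- In psql: time a local dump and reload of one table to get a
-- local-disk baseline that does not involve the Oracle server at all.
\timing on

-- dump roughly 3 GB of data to a local file
\copy (SELECT * FROM local_postgresql_table) TO '/tmp/baseline.copy'

-- reload it into an empty scratch copy of the table
\copy local_postgresql_table_test FROM '/tmp/baseline.copy'
```

If these two steps are already slow, the local disks are the bottleneck; if they are fast, the problem is on the Oracle/FDW side.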
{
"msg_contents": "On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Aug 2017 13:37:29 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
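The separate temp tablespace suggested in the message above can be set up with SQL. A minimal sketch, assuming a dedicated mount point; the path and tablespace name are hypothetical, and the directory must already exist, be empty, and be owned by the postgres OS user:

```sql
-- Put temporary files (sorts, hashes, temp tables) on their own disk
-- so they do not compete with table and WAL I/O.
CREATE TABLESPACE temp_disk LOCATION '/mnt/pg_temp';

-- Use it for temporary objects cluster-wide.
ALTER SYSTEM SET temp_tablespaces = 'temp_disk';
SELECT pg_reload_conf();
```

Moving the WAL directory itself is done at the filesystem level (e.g. relocating pg_xlog to another disk and symlinking it, with the server stopped), not with SQL.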
{
"msg_contents": "This server is dedicated to be a postgresql production database, therefore\npostgresql is the only thing the runs on the server. The fs that I`m using\nis xfs. I`ll add two different disks - one for the wals and one for the\ntemp tablespace. Regarding the disk, what size should they be considering\nthat the database size is about 250G. Does 16G of ram considered little ? I\ninstalled iotop and I see that postgresql writer is writing most of the\ntime and above all.\n\nI mentioned that I perform alot of insert into table select * from table.\nBefore that I remove indexes,constraints and truncate the table. Should I\nrun vacuum before or after the operation ?\n\n2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n\n> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > I checked with the storage team in the company and they saw that I have\n> alot\n> > of io on the server. How should I reduce the io that the postgresql uses\n> ?\n>\n> Do you have concurrent activity on that server?\n>\n> What filesystem are you using wherever the data is sitting?\n>\n> If you've got concurrent fsyncs happening, some filesystems handle\n> that poorly. When you've got WAL and data mixed in a single disk, or\n> worse, filesystem, it happens often that the filesystem won't handle\n> the write barriers for the WAL efficiently. I/O gets intermingled with\n> bulk operations, and even small fsyncs will have to flush writes from\n> bulk operations, which makes a mess of things.\n>\n> It is a very good idea, and in fact a recommended practice, to put WAL\n> on its own disk for that reason mainly.\n>\n> With that little RAM, you'll also probably cause a lot of I/O in temp\n> files, so I'd also recommend setting aside another disk for a temp\n> tablespace so that I/O doesn't block other transactions as well.\n>\n> This is all assuming you've got concurrent activity on the server. If\n> not, install iotop and try to see who's causing that much I/O.\n>\n\nThis server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. 
I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Sun, 20 Aug 2017 09:39:45 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
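On the "vacuum before or after?" question above: a freshly truncated and bulk-loaded table has no dead rows, so what usually matters is `ANALYZE` after the load rather than `VACUUM`. A minimal sketch of the whole pattern described in that message, with hypothetical table, constraint, index, and column names:

```sql
-- 1. strip the table down before the load
ALTER TABLE local_postgresql_table DROP CONSTRAINT IF EXISTS local_pk;
DROP INDEX IF EXISTS local_idx;
TRUNCATE local_postgresql_table;

-- 2. bulk load
INSERT INTO local_postgresql_table SELECT * FROM remote_oracle_table;

-- 3. rebuild and collect statistics; ANALYZE is the important step here,
--    a full VACUUM is not needed right after a clean bulk load.
ALTER TABLE local_postgresql_table ADD PRIMARY KEY (id);
CREATE INDEX local_idx ON local_postgresql_table (some_column);
ANALYZE local_postgresql_table;
```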
{
"msg_contents": "I realized something weird. When I`m preforming the copy utility of\npostgresql in order to create dump from a local table in my postgresql db\nit takes for 32G table 20 minutes. When I try to use copy for a foregin\ntable (on oracle database) It takes more than 2 hours.. During the copy\noperation from the foreign table I dont see alot of write operations, with\niotop i see that its writes 3 M/s. What else I can check ?\n\n2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> This server is dedicated to be a postgresql production database, therefore\n> postgresql is the only thing the runs on the server. The fs that I`m using\n> is xfs. I`ll add two different disks - one for the wals and one for the\n> temp tablespace. Regarding the disk, what size should they be considering\n> that the database size is about 250G. Does 16G of ram considered little ? I\n> installed iotop and I see that postgresql writer is writing most of the\n> time and above all.\n>\n> I mentioned that I perform alot of insert into table select * from table.\n> Before that I remove indexes,constraints and truncate the table. Should I\n> run vacuum before or after the operation ?\n>\n> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>\n>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>> <[email protected]> wrote:\n>> > I checked with the storage team in the company and they saw that I have\n>> alot\n>> > of io on the server. How should I reduce the io that the postgresql\n>> uses ?\n>>\n>> Do you have concurrent activity on that server?\n>>\n>> What filesystem are you using wherever the data is sitting?\n>>\n>> If you've got concurrent fsyncs happening, some filesystems handle\n>> that poorly. When you've got WAL and data mixed in a single disk, or\n>> worse, filesystem, it happens often that the filesystem won't handle\n>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>> bulk operations, and even small fsyncs will have to flush writes from\n>> bulk operations, which makes a mess of things.\n>>\n>> It is a very good idea, and in fact a recommended practice, to put WAL\n>> on its own disk for that reason mainly.\n>>\n>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>> files, so I'd also recommend setting aside another disk for a temp\n>> tablespace so that I/O doesn't block other transactions as well.\n>>\n>> This is all assuming you've got concurrent activity on the server. If\n>> not, install iotop and try to see who's causing that much I/O.\n>>\n>\n>\n\nI realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? 
I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Sun, 20 Aug 2017 14:00:51 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
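To separate the cost of reading through the foreign table from the cost of writing locally, it can help to time a read-only scan of the remote table first. A minimal sketch; the foreign table name is the hypothetical one used in the thread:

```sql
\timing on

-- If this alone takes on the order of hours, the bottleneck is the
-- FDW / network fetch from Oracle, not the local disk writes.
SELECT count(*) FROM remote_oracle_table;

-- EXPLAIN ANALYZE shows how long the foreign scan itself takes.
EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM remote_oracle_table;
```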
{
"msg_contents": "When I run copy from local table the speed of the writing is 22 M/S. When I\nuse the copy from remote_oracle_Table it writes 3 M/s. SCP between the\nservers coppies very fast. How should I continue ?\n\n2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> I realized something weird. When I`m preforming the copy utility of\n> postgresql in order to create dump from a local table in my postgresql db\n> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n> table (on oracle database) It takes more than 2 hours.. During the copy\n> operation from the foreign table I dont see alot of write operations, with\n> iotop i see that its writes 3 M/s. What else I can check ?\n>\n> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:\n>\n>> This server is dedicated to be a postgresql production database,\n>> therefore postgresql is the only thing the runs on the server. The fs that\n>> I`m using is xfs. I`ll add two different disks - one for the wals and one\n>> for the temp tablespace. Regarding the disk, what size should they be\n>> considering that the database size is about 250G. Does 16G of ram\n>> considered little ? I installed iotop and I see that postgresql writer is\n>> writing most of the time and above all.\n>>\n>> I mentioned that I perform alot of insert into table select * from table.\n>> Before that I remove indexes,constraints and truncate the table. Should I\n>> run vacuum before or after the operation ?\n>>\n>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>>\n>>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>>> <[email protected]> wrote:\n>>> > I checked with the storage team in the company and they saw that I\n>>> have alot\n>>> > of io on the server. How should I reduce the io that the postgresql\n>>> uses ?\n>>>\n>>> Do you have concurrent activity on that server?\n>>>\n>>> What filesystem are you using wherever the data is sitting?\n>>>\n>>> If you've got concurrent fsyncs happening, some filesystems handle\n>>> that poorly. When you've got WAL and data mixed in a single disk, or\n>>> worse, filesystem, it happens often that the filesystem won't handle\n>>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>>> bulk operations, and even small fsyncs will have to flush writes from\n>>> bulk operations, which makes a mess of things.\n>>>\n>>> It is a very good idea, and in fact a recommended practice, to put WAL\n>>> on its own disk for that reason mainly.\n>>>\n>>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>>> files, so I'd also recommend setting aside another disk for a temp\n>>> tablespace so that I/O doesn't block other transactions as well.\n>>>\n>>> This is all assuming you've got concurrent activity on the server. If\n>>> not, install iotop and try to see who's causing that much I/O.\n>>>\n>>\n>>\n\nWhen I run copy from local table the speed of the writing is 22 M/S. When I use the copy from remote_oracle_Table it writes 3 M/s. SCP between the servers coppies very fast. How should I continue ?2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. 
During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Sun, 20 Aug 2017 14:32:09 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "To summarize, I still have performance problems. My current situation :\n\nI'm trying to copy the data of many tables in the oracle database into my\npostgresql tables. I'm doing so by running insert into\nlocal_postgresql_temp select * from remote_oracle_table. The performance of\nthis operation are very slow and I tried to check the reason for that and\nmybe choose a different alternative.\n\n1)First method - Insert into local_postgresql_table select * from\nremote_oracle_table this generated total disk write of 7 M/s and actual\ndisk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n\n2)second method - copy (select * from oracle_remote_table) to\n/tmp/dump generates\ntotal disk write of 4 M/s and actuval disk write of 100 K/s. The copy\nutility suppose to be very fast but it seems very slow.\n\n-When I run copy from the local dump, the reading is very fast 300 M/s.\n\n-I created a 32G file on the oracle server and used scp to copy it and it\ntook me a few minutes.\n\n-The wals directory is located on a different file system. The parameters I\nassigned :\n\nmin_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5\nmax_worker_processes = 8\neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8\n\nHOW can I increase the writes ? How can I get the data faster from the\noracle database to my postgresql database?\n\n2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> I realized something weird. When I`m preforming the copy utility of\n> postgresql in order to create dump from a local table in my postgresql db\n> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n> table (on oracle database) It takes more than 2 hours.. During the copy\n> operation from the foreign table I dont see alot of write operations, with\n> iotop i see that its writes 3 M/s. What else I can check ?\n>\n> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:\n>\n>> This server is dedicated to be a postgresql production database,\n>> therefore postgresql is the only thing the runs on the server. The fs that\n>> I`m using is xfs. I`ll add two different disks - one for the wals and one\n>> for the temp tablespace. Regarding the disk, what size should they be\n>> considering that the database size is about 250G. Does 16G of ram\n>> considered little ? I installed iotop and I see that postgresql writer is\n>> writing most of the time and above all.\n>>\n>> I mentioned that I perform alot of insert into table select * from table.\n>> Before that I remove indexes,constraints and truncate the table. Should I\n>> run vacuum before or after the operation ?\n>>\n>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>>\n>>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>>> <[email protected]> wrote:\n>>> > I checked with the storage team in the company and they saw that I\n>>> have alot\n>>> > of io on the server. How should I reduce the io that the postgresql\n>>> uses ?\n>>>\n>>> Do you have concurrent activity on that server?\n>>>\n>>> What filesystem are you using wherever the data is sitting?\n>>>\n>>> If you've got concurrent fsyncs happening, some filesystems handle\n>>> that poorly. When you've got WAL and data mixed in a single disk, or\n>>> worse, filesystem, it happens often that the filesystem won't handle\n>>> the write barriers for the WAL efficiently. 
I/O gets intermingled with\n>>> bulk operations, and even small fsyncs will have to flush writes from\n>>> bulk operations, which makes a mess of things.\n>>>\n>>> It is a very good idea, and in fact a recommended practice, to put WAL\n>>> on its own disk for that reason mainly.\n>>>\n>>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>>> files, so I'd also recommend setting aside another disk for a temp\n>>> tablespace so that I/O doesn't block other transactions as well.\n>>>\n>>> This is all assuming you've got concurrent activity on the server. If\n>>> not, install iotop and try to see who's causing that much I/O.\n>>>\n>>\n>>\n\nTo summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.-When I run copy from the local dump, the reading is very fast 300 M/s.-I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.-The wals directory is located on a different file system. The parameters I assigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. 
How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Mon, 21 Aug 2017 11:00:41 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
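Beyond the memory settings listed in the message above, bulk-load write throughput is often limited by checkpoints: frequent checkpoints mean repeated full-page writes and extra flushing. A minimal sketch of spacing them out, assuming PostgreSQL 9.5+ (where `max_wal_size` replaced `checkpoint_segments`); the values are illustrative, not tuned recommendations:

```sql
-- Fewer, larger checkpoints during bulk loads.
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();
```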
{
"msg_contents": "> El 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected]> escribió:\n> \n> To summarize, I still have performance problems. My current situation : \n> I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.\n> \n> 1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n> \n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.\n> \n> \n\n\nAre you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.\n\n\n> -When I run copy from the local dump, the reading is very fast 300 M/s.\n> \n> \n\nYou reported it was slow before. What has changed? How much does it take to load the 32G table then?\n\n\n> -I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.\n> \n> \n> -The wals directory is located on a different file system. The parameters I assigned :\n> \n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5 \n> max_worker_processes = 8 \n> effective_cache_size = 12GB\n> work_mem = 128MB\n> maintenance_work_mem = 4GB\n> shared_buffers = 2000MB\n> RAM : 16G\n> CPU CORES : 8\n> HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?\n> \n> \n\n\nExtract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. You can even pipe commands and do it in a single step.\n\nThis is what I meant when I said that COPY is much faster than any thing else. To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. \n\n> \n> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected] <mailto:[email protected]>>:\n> I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? \n> \n> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected] <mailto:[email protected]>>:\n> This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? 
I installed iotop and I see that postgresql writer is writing most of the time and above all.\n> \n> I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? \n> \n> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected] <mailto:[email protected]>>:\n> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n> <[email protected] <mailto:[email protected]>> wrote:\n> > I checked with the storage team in the company and they saw that I have alot\n> > of io on the server. How should I reduce the io that the postgresql uses ?\n> \n> Do you have concurrent activity on that server?\n> \n> What filesystem are you using wherever the data is sitting?\n> \n> If you've got concurrent fsyncs happening, some filesystems handle\n> that poorly. When you've got WAL and data mixed in a single disk, or\n> worse, filesystem, it happens often that the filesystem won't handle\n> the write barriers for the WAL efficiently. I/O gets intermingled with\n> bulk operations, and even small fsyncs will have to flush writes from\n> bulk operations, which makes a mess of things.\n> \n> It is a very good idea, and in fact a recommended practice, to put WAL\n> on its own disk for that reason mainly.\n> \n> With that little RAM, you'll also probably cause a lot of I/O in temp\n> files, so I'd also recommend setting aside another disk for a temp\n> tablespace so that I/O doesn't block other transactions as well.\n> \n> This is all assuming you've got concurrent activity on the server. If\n> not, install iotop and try to see who's causing that much I/O.\n> \n\n\nEl 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected]> escribió:To summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.Are you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.-When I run copy from the local dump, the reading is very fast 300 M/s.You reported it was slow before. What has changed? How much does it take to load the 32G table then?-I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.-The wals directory is located on a different file system. The parameters I assigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?Extract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. 
You can even pipe commands and do it in a single step.This is what I meant when I said that COPY is much faster than any thing else. To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Mon, 21 Aug 2017 10:37:40 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
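A minimal sketch of the single-step pipe described above: SQL*Plus streams the Oracle table out as CSV and psql loads it straight into the local table with \copy, so no intermediate dump file is written. It assumes SQL*Plus 12.2 or later (for SET MARKUP CSV), a table named big_table on both sides, and placeholder connection strings; all of these are assumptions to adjust, not details taken from this thread.

# Placeholder connection details (assumptions)
ORA_CONN="scott/tiger@//oracle-host:1521/ORCL"
PG_CONN="postgresql://postgres@localhost/mydb"

# Stream Oracle -> PostgreSQL in one step; only the target table and its WAL
# are written on the PostgreSQL side.
sqlplus -s "$ORA_CONN" <<'EOF' | psql "$PG_CONN" -c "\copy big_table FROM STDIN WITH (FORMAT csv)"
SET MARKUP CSV ON QUOTE ON
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SELECT * FROM big_table;
EXIT
EOF

On older SQL*Plus versions the CSV markup is not available and the columns have to be formatted by hand (SET COLSEP and friends), which is part of why a dedicated unload script or ora2pg becomes attractive.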
{
"msg_contents": "All this operation runs as part of a big transaction that I run. How can I\ncreate a dump in the oracle server and copy it to the postgresql server\nfrom a postgresql transaction ? Chopping the table is optional when I use\ncopy, but when I use copy to remote oracle table it takes longer to create\nthe dump.\n\n2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <\[email protected]>:\n\n>\n> El 21 ago 2017, a las 10:00, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> To summarize, I still have performance problems. My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into my\n> postgresql tables. I'm doing so by running insert into\n> local_postgresql_temp select * from remote_oracle_table. The performance\n> of this operation are very slow and I tried to check the reason for that\n> and mybe choose a different alternative.\n>\n> 1)First method - Insert into local_postgresql_table select * from\n> remote_oracle_table this generated total disk write of 7 M/s and actual\n> disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>\n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates\n> total disk write of 4 M/s and actuval disk write of 100 K/s. The copy\n> utility suppose to be very fast but it seems very slow.\n>\n>\n>\n> Are you using a FDW to access oracle server and then dump it using copy?\n> This is going to be slow, FDW isn't fast.\n>\n>\n> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>\n>\n> You reported it was slow before. What has changed? How much does it take\n> to load the 32G table then?\n>\n>\n> -I created a 32G file on the oracle server and used scp to copy it and it\n> took me a few minutes.\n>\n> -The wals directory is located on a different file system. The parameters\n> I assigned :\n>\n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5\n> max_worker_processes = 8\n> effective_cache_size = 12GB\n> work_mem = 128MB\n> maintenance_work_mem = 4GB\n> shared_buffers = 2000MB\n> RAM : 16G\n> CPU CORES : 8\n>\n> HOW can I increase the writes ? How can I get the data faster from the\n> oracle database to my postgresql database?\n>\n>\n>\n> Extract the table to a file in the oracle server in a format that the COPY\n> utility can read, then copy it to postgres server and load it. You can even\n> pipe commands and do it in a single step.\n>\n> This is what I meant when I said that COPY is much faster than any thing\n> else. To make it even faster, if I/O is not your bottleneck, you can chop\n> the table in chunks and load it in parallel as I told you before, I have\n> done this many times when migrating data from oracle to postgres. ora2pg\n> uses this method to migrate data from oracle to postgres too.\n>\n>\n> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>\n> :\n>\n>> I realized something weird. When I`m preforming the copy utility of\n>> postgresql in order to create dump from a local table in my postgresql db\n>> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n>> table (on oracle database) It takes more than 2 hours.. During the copy\n>> operation from the foreign table I dont see alot of write operations, with\n>> iotop i see that its writes 3 M/s. 
What else I can check ?\n>>\n>> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>\n>> :\n>>\n>>> This server is dedicated to be a postgresql production database,\n>>> therefore postgresql is the only thing the runs on the server. The fs that\n>>> I`m using is xfs. I`ll add two different disks - one for the wals and one\n>>> for the temp tablespace. Regarding the disk, what size should they be\n>>> considering that the database size is about 250G. Does 16G of ram\n>>> considered little ? I installed iotop and I see that postgresql writer is\n>>> writing most of the time and above all.\n>>>\n>>> I mentioned that I perform alot of insert into table select * from\n>>> table. Before that I remove indexes,constraints and truncate the table.\n>>> Should I run vacuum before or after the operation ?\n>>>\n>>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>>>\n>>>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>>>> <[email protected]> wrote:\n>>>> > I checked with the storage team in the company and they saw that I\n>>>> have alot\n>>>> > of io on the server. How should I reduce the io that the postgresql\n>>>> uses ?\n>>>>\n>>>> Do you have concurrent activity on that server?\n>>>>\n>>>> What filesystem are you using wherever the data is sitting?\n>>>>\n>>>> If you've got concurrent fsyncs happening, some filesystems handle\n>>>> that poorly. When you've got WAL and data mixed in a single disk, or\n>>>> worse, filesystem, it happens often that the filesystem won't handle\n>>>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>>>> bulk operations, and even small fsyncs will have to flush writes from\n>>>> bulk operations, which makes a mess of things.\n>>>>\n>>>> It is a very good idea, and in fact a recommended practice, to put WAL\n>>>> on its own disk for that reason mainly.\n>>>>\n>>>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>>>> files, so I'd also recommend setting aside another disk for a temp\n>>>> tablespace so that I/O doesn't block other transactions as well.\n>>>>\n>>>> This is all assuming you've got concurrent activity on the server. If\n>>>> not, install iotop and try to see who's causing that much I/O.\n>>>>\n>>>\n>>>\n>\n\nAll this operation runs as part of a big transaction that I run. How can I create a dump in the oracle server and copy it to the postgresql server from a postgresql transaction ? Chopping the table is optional when I use copy, but when I use copy to remote oracle table it takes longer to create the dump. 2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <[email protected]>:El 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected]> escribió:To summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. 
The copy utility suppose to be very fast but it seems very slow.Are you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.-When I run copy from the local dump, the reading is very fast 300 M/s.You reported it was slow before. What has changed? How much does it take to load the 32G table then?-I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.-The wals directory is located on a different file system. The parameters I assigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?Extract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. You can even pipe commands and do it in a single step.This is what I meant when I said that COPY is much faster than any thing else. To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. 
I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Mon, 21 Aug 2017 14:27:37 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
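One way to keep the whole load inside a single PostgreSQL transaction, as asked above, is COPY ... FROM PROGRAM (available since 9.3): the database server itself runs an extraction command and reads its output, and the COPY takes part in the surrounding transaction. It requires superuser rights and an Oracle client installed on the PostgreSQL server; the script path, table names and connect string below are placeholders, not details from this thread.

# /tmp/export_big_table.sql would hold the SET MARKUP CSV / SELECT script from
# the previous sketch; the program runs as the postgres OS user on the server.
psql -d mydb <<'SQL'
BEGIN;
TRUNCATE local_postgresql_table;
COPY local_postgresql_table
    FROM PROGRAM 'sqlplus -s scott/tiger@//oracle-host:1521/ORCL @/tmp/export_big_table.sql'
    WITH (FORMAT csv);
-- rebuild indexes and constraints here if they were dropped beforehand
COMMIT;
SQL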
{
"msg_contents": "> El 21 ago 2017, a las 13:27, Mariel Cherkassky <[email protected]> escribió:\n> \n> All this operation runs as part of a big transaction that I run.\n> How can I create a dump in the oracle server and copy it to the postgresql server from a postgresql transaction ?\n\nI guess you could create a user defined function in any of the available languages (perl, python, java, …). Functions run inside transactions too…this is not simple, though. \n\n> Chopping the table is optional when I use copy, but when I use copy to remote oracle table it takes longer to create the dump. \n\nIt may take longer depending on how the oracle machine, table and database are configured. In my experience oracle is not very fast dumping whole tables, not to mention tables with BLOB data, which can be as slow as hundreds of records per second (which is probably not your case).\n\nIf this transaction is to synchronize data between transactional servers and data analysis servers you may consider using some type of replication where only changes are sent. EnterpriseDB has tools to do such things, I’m not aware of any other tool that can do this between oracle and postgres.\n\nRegards,\n\nDaniel.\n\n> \n> 2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <[email protected] <mailto:[email protected]>>:\n> \n>> El 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected] <mailto:[email protected]>> escribió:\n>> \n>> To summarize, I still have performance problems. My current situation : \n>> I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.\n>> \n>> 1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>> \n>> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.\n>> \n>> \n> \n> \n> Are you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.\n> \n> \n>> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>> \n>> \n> \n> You reported it was slow before. What has changed? How much does it take to load the 32G table then?\n> \n> \n>> -I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.\n>> \n>> \n>> -The wals directory is located on a different file system. The parameters I assigned :\n>> \n>> min_parallel_relation_size = 200MB\n>> max_parallel_workers_per_gather = 5 \n>> max_worker_processes = 8 \n>> effective_cache_size = 12GB\n>> work_mem = 128MB\n>> maintenance_work_mem = 4GB\n>> shared_buffers = 2000MB\n>> RAM : 16G\n>> CPU CORES : 8\n>> HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?\n>> \n>> \n> \n> \n> Extract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. You can even pipe commands and do it in a single step.\n> \n> This is what I meant when I said that COPY is much faster than any thing else. 
To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. \n> \n>> \n>> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected] <mailto:[email protected]>>:\n>> I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? \n>> \n>> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected] <mailto:[email protected]>>:\n>> This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.\n>> \n>> I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? \n>> \n>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected] <mailto:[email protected]>>:\n>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>> <[email protected] <mailto:[email protected]>> wrote:\n>> > I checked with the storage team in the company and they saw that I have alot\n>> > of io on the server. How should I reduce the io that the postgresql uses ?\n>> \n>> Do you have concurrent activity on that server?\n>> \n>> What filesystem are you using wherever the data is sitting?\n>> \n>> If you've got concurrent fsyncs happening, some filesystems handle\n>> that poorly. When you've got WAL and data mixed in a single disk, or\n>> worse, filesystem, it happens often that the filesystem won't handle\n>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>> bulk operations, and even small fsyncs will have to flush writes from\n>> bulk operations, which makes a mess of things.\n>> \n>> It is a very good idea, and in fact a recommended practice, to put WAL\n>> on its own disk for that reason mainly.\n>> \n>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>> files, so I'd also recommend setting aside another disk for a temp\n>> tablespace so that I/O doesn't block other transactions as well.\n>> \n>> This is all assuming you've got concurrent activity on the server. If\n>> not, install iotop and try to see who's causing that much I/O.\n>> \n> \n> \n\n\nEl 21 ago 2017, a las 13:27, Mariel Cherkassky <[email protected]> escribió:All this operation runs as part of a big transaction that I run.How can I create a dump in the oracle server and copy it to the postgresql server from a postgresql transaction ? I guess you could create a user defined function in any of the available languages (perl, python, java, …). Functions run inside transactions too…this is not simple, though. 
Chopping the table is optional when I use copy, but when I use copy to remote oracle table it takes longer to create the dump. It may take longer depending on how the oracle machine, table and database are configured. In my experience oracle is not very fast dumping whole tables, not to mention tables with BLOB data, which can be as slow as hundreds of records per second (which is probably not your case).If this transaction is to synchronize data between transactional servers and data analysis servers you may consider using some type of replication where only changes are sent. EnterpriseDB has tools to do such things, I’m not aware of any other tool that can do this between oracle and postgres.Regards,Daniel.2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <[email protected]>:El 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected]> escribió:To summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.Are you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.-When I run copy from the local dump, the reading is very fast 300 M/s.You reported it was slow before. What has changed? How much does it take to load the 32G table then?-I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.-The wals directory is located on a different file system. The parameters I assigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?Extract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. You can even pipe commands and do it in a single step.This is what I meant when I said that COPY is much faster than any thing else. To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 
2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Mon, 21 Aug 2017 13:53:30 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
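A minimal sketch of the user-defined-function route mentioned above, using the untrusted PL/Perl language (plperlu) to shell out from inside a transaction. Installing plperlu and calling such a function require superuser rights, the command runs on the database server with the server's OS privileges, and the function name and example command are illustrative only.

psql -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS plperlu;

-- Runs a shell command on the database server and returns its output.
-- Because functions execute inside the calling transaction, this can be
-- combined with TRUNCATE / COPY / index rebuilds in the same transaction.
CREATE OR REPLACE FUNCTION run_shell(cmd text) RETURNS text AS $$
    my ($cmd) = @_;
    return `$cmd`;
$$ LANGUAGE plperlu;

-- Illustrative call: fetch a dump produced on the Oracle side.
-- SELECT run_shell('scp oracle-host:/tmp/big_table.csv /tmp/big_table.csv');
SQL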
{
"msg_contents": "I`m searching for a way to improve the current performance, I'm not\ninteresting in using a different tool or writing something new because I'm\ntrying to migrate a system on oracle database to a postgresql database.\n\n2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller <\[email protected]>:\n\n>\n> El 21 ago 2017, a las 13:27, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> All this operation runs as part of a big transaction that I run.\n>\n> How can I create a dump in the oracle server and copy it to the postgresql\n> server from a postgresql transaction ?\n>\n>\n> I guess you could create a user defined function in any of the available\n> languages (perl, python, java, …). Functions run inside transactions\n> too…this is not simple, though.\n>\n> Chopping the table is optional when I use copy, but when I use copy to\n> remote oracle table it takes longer to create the dump.\n>\n>\n> It may take longer depending on how the oracle machine, table and database\n> are configured. In my experience oracle is not very fast dumping whole\n> tables, not to mention tables with BLOB data, which can be as slow as\n> hundreds of records per second (which is probably not your case).\n>\n> If this transaction is to synchronize data between transactional servers\n> and data analysis servers you may consider using some type of replication\n> where only changes are sent. EnterpriseDB has tools to do such things, I’m\n> not aware of any other tool that can do this between oracle and postgres.\n>\n> Regards,\n>\n> Daniel.\n>\n>\n> 2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <\n> [email protected]>:\n>\n>>\n>> El 21 ago 2017, a las 10:00, Mariel Cherkassky <\n>> [email protected]> escribió:\n>>\n>> To summarize, I still have performance problems. My current situation :\n>>\n>> I'm trying to copy the data of many tables in the oracle database into my\n>> postgresql tables. I'm doing so by running insert into\n>> local_postgresql_temp select * from remote_oracle_table. The performance\n>> of this operation are very slow and I tried to check the reason for that\n>> and mybe choose a different alternative.\n>>\n>> 1)First method - Insert into local_postgresql_table select * from\n>> remote_oracle_table this generated total disk write of 7 M/s and actual\n>> disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>>\n>> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates\n>> total disk write of 4 M/s and actuval disk write of 100 K/s. The copy\n>> utility suppose to be very fast but it seems very slow.\n>>\n>>\n>>\n>> Are you using a FDW to access oracle server and then dump it using copy?\n>> This is going to be slow, FDW isn't fast.\n>>\n>>\n>> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>>\n>>\n>> You reported it was slow before. What has changed? How much does it take\n>> to load the 32G table then?\n>>\n>>\n>> -I created a 32G file on the oracle server and used scp to copy it and it\n>> took me a few minutes.\n>>\n>> -The wals directory is located on a different file system. The parameters\n>> I assigned :\n>>\n>> min_parallel_relation_size = 200MB\n>> max_parallel_workers_per_gather = 5\n>> max_worker_processes = 8\n>> effective_cache_size = 12GB\n>> work_mem = 128MB\n>> maintenance_work_mem = 4GB\n>> shared_buffers = 2000MB\n>> RAM : 16G\n>> CPU CORES : 8\n>>\n>> HOW can I increase the writes ? 
How can I get the data faster from the\n>> oracle database to my postgresql database?\n>>\n>>\n>>\n>> Extract the table to a file in the oracle server in a format that the\n>> COPY utility can read, then copy it to postgres server and load it. You can\n>> even pipe commands and do it in a single step.\n>>\n>> This is what I meant when I said that COPY is much faster than any thing\n>> else. To make it even faster, if I/O is not your bottleneck, you can chop\n>> the table in chunks and load it in parallel as I told you before, I have\n>> done this many times when migrating data from oracle to postgres. ora2pg\n>> uses this method to migrate data from oracle to postgres too.\n>>\n>>\n>> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]\n>> >:\n>>\n>>> I realized something weird. When I`m preforming the copy utility of\n>>> postgresql in order to create dump from a local table in my postgresql db\n>>> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n>>> table (on oracle database) It takes more than 2 hours.. During the copy\n>>> operation from the foreign table I dont see alot of write operations, with\n>>> iotop i see that its writes 3 M/s. What else I can check ?\n>>>\n>>> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]\n>>> >:\n>>>\n>>>> This server is dedicated to be a postgresql production database,\n>>>> therefore postgresql is the only thing the runs on the server. The fs that\n>>>> I`m using is xfs. I`ll add two different disks - one for the wals and one\n>>>> for the temp tablespace. Regarding the disk, what size should they be\n>>>> considering that the database size is about 250G. Does 16G of ram\n>>>> considered little ? I installed iotop and I see that postgresql writer is\n>>>> writing most of the time and above all.\n>>>>\n>>>> I mentioned that I perform alot of insert into table select * from\n>>>> table. Before that I remove indexes,constraints and truncate the table.\n>>>> Should I run vacuum before or after the operation ?\n>>>>\n>>>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>>>>\n>>>>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>>>>> <[email protected]> wrote:\n>>>>> > I checked with the storage team in the company and they saw that I\n>>>>> have alot\n>>>>> > of io on the server. How should I reduce the io that the postgresql\n>>>>> uses ?\n>>>>>\n>>>>> Do you have concurrent activity on that server?\n>>>>>\n>>>>> What filesystem are you using wherever the data is sitting?\n>>>>>\n>>>>> If you've got concurrent fsyncs happening, some filesystems handle\n>>>>> that poorly. When you've got WAL and data mixed in a single disk, or\n>>>>> worse, filesystem, it happens often that the filesystem won't handle\n>>>>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>>>>> bulk operations, and even small fsyncs will have to flush writes from\n>>>>> bulk operations, which makes a mess of things.\n>>>>>\n>>>>> It is a very good idea, and in fact a recommended practice, to put WAL\n>>>>> on its own disk for that reason mainly.\n>>>>>\n>>>>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>>>>> files, so I'd also recommend setting aside another disk for a temp\n>>>>> tablespace so that I/O doesn't block other transactions as well.\n>>>>>\n>>>>> This is all assuming you've got concurrent activity on the server. 
If\n>>>>> not, install iotop and try to see who's causing that much I/O.\n>>>>>\n>>>>\n>>>>\n>>\n>\n>\n\nI`m searching for a way to improve the current performance, I'm not interesting in using a different tool or writing something new because I'm trying to migrate a system on oracle database to a postgresql database.2017-08-21 14:53 GMT+03:00 Daniel Blanch Bataller <[email protected]>:El 21 ago 2017, a las 13:27, Mariel Cherkassky <[email protected]> escribió:All this operation runs as part of a big transaction that I run.How can I create a dump in the oracle server and copy it to the postgresql server from a postgresql transaction ? I guess you could create a user defined function in any of the available languages (perl, python, java, …). Functions run inside transactions too…this is not simple, though. Chopping the table is optional when I use copy, but when I use copy to remote oracle table it takes longer to create the dump. It may take longer depending on how the oracle machine, table and database are configured. In my experience oracle is not very fast dumping whole tables, not to mention tables with BLOB data, which can be as slow as hundreds of records per second (which is probably not your case).If this transaction is to synchronize data between transactional servers and data analysis servers you may consider using some type of replication where only changes are sent. EnterpriseDB has tools to do such things, I’m not aware of any other tool that can do this between oracle and postgres.Regards,Daniel.2017-08-21 11:37 GMT+03:00 Daniel Blanch Bataller <[email protected]>:El 21 ago 2017, a las 10:00, Mariel Cherkassky <[email protected]> escribió:To summarize, I still have performance problems. My current situation : I'm trying to copy the data of many tables in the oracle database into my postgresql tables. I'm doing so by running insert into local_postgresql_temp select * from remote_oracle_table. The performance of this operation are very slow and I tried to check the reason for that and mybe choose a different alternative.1)First method - Insert into local_postgresql_table select * from remote_oracle_table this generated total disk write of 7 M/s and actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates total disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility suppose to be very fast but it seems very slow.Are you using a FDW to access oracle server and then dump it using copy? This is going to be slow, FDW isn't fast.-When I run copy from the local dump, the reading is very fast 300 M/s.You reported it was slow before. What has changed? How much does it take to load the 32G table then?-I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.-The wals directory is located on a different file system. The parameters I assigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW can I increase the writes ? How can I get the data faster from the oracle database to my postgresql database?Extract the table to a file in the oracle server in a format that the COPY utility can read, then copy it to postgres server and load it. 
You can even pipe commands and do it in a single step.This is what I meant when I said that COPY is much faster than any thing else. To make it even faster, if I/O is not your bottleneck, you can chop the table in chunks and load it in parallel as I told you before, I have done this many times when migrating data from oracle to postgres. ora2pg uses this method to migrate data from oracle to postgres too. 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I realized something weird. When I`m preforming the copy utility of postgresql in order to create dump from a local table in my postgresql db it takes for 32G table 20 minutes. When I try to use copy for a foregin table (on oracle database) It takes more than 2 hours.. During the copy operation from the foreign table I dont see alot of write operations, with iotop i see that its writes 3 M/s. What else I can check ? 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This server is dedicated to be a postgresql production database, therefore postgresql is the only thing the runs on the server. The fs that I`m using is xfs. I`ll add two different disks - one for the wals and one for the temp tablespace. Regarding the disk, what size should they be considering that the database size is about 250G. Does 16G of ram considered little ? I installed iotop and I see that postgresql writer is writing most of the time and above all.I mentioned that I perform alot of insert into table select * from table. Before that I remove indexes,constraints and truncate the table. Should I run vacuum before or after the operation ? 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> I checked with the storage team in the company and they saw that I have alot\n> of io on the server. How should I reduce the io that the postgresql uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.",
"msg_date": "Mon, 21 Aug 2017 15:22:58 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
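To stay with COPY but speed it up, a sketch of the chop-into-chunks-and-load-in-parallel idea quoted above: each chunk gets its own export-and-load pipe, and several pipes run at once. The table name, id column, chunk boundaries and connection details are assumptions; in practice the boundaries would be derived from the real key's min and max.

# Placeholder connection details (assumptions)
ORA_CONN="scott/tiger@//oracle-host:1521/ORCL"
PG_DB="mydb"

load_chunk () {
  local lo=$1 hi=$2
  sqlplus -s "$ORA_CONN" <<EOF | psql -d "$PG_DB" -c "\copy big_table FROM STDIN WITH (FORMAT csv)"
SET MARKUP CSV ON QUOTE ON
SET HEADING OFF
SET FEEDBACK OFF
SELECT * FROM big_table WHERE id >= $lo AND id < $hi;
EOF
}

# Four chunks in parallel; the boundaries are made up for the sketch.
load_chunk 1        5000001  &
load_chunk 5000001  10000001 &
load_chunk 10000001 15000001 &
load_chunk 15000001 20000001 &
wait

This only pays off if I/O on the PostgreSQL side is not already the bottleneck, which is exactly what the iotop observations in this thread suggest checking first.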
{
"msg_contents": "Maybe I missed it in this continuous thread activity, but have you tried \n'''ora2pg\"? You can export from Oracle and import to Postgres in \nparallel jobs. The import commands use the efficient COPY command by \ndefault (unless you override it in the ora2pg configuration file). You \ncan do the export and subsequent import in memory, but I would suggest \nthe actual file export and import so you can take advantage of the \nparallel feature.\n\nRegards,\nMichael Vitale\n\n> Mariel Cherkassky <mailto:[email protected]>\n> Monday, August 21, 2017 8:22 AM\n> I`m searching for a way to improve the current performance, I'm not \n> interesting in using a different tool or writing something new because \n> I'm trying to migrate a system on oracle database to a postgresql \n> database.\n>\n>\n> Daniel Blanch Bataller <mailto:[email protected]>\n> Monday, August 21, 2017 4:37 AM\n>\n>> El 21 ago 2017, a las 10:00, Mariel Cherkassky \n>> <[email protected] <mailto:[email protected]>> \n>> escribió:\n>>\n>> To summarize, I still have performance problems. My current situation :\n>>\n>> I'm trying to copy the data of many tables in the oracle database \n>> into my postgresql tables. I'm doing so by running |insert into \n>> local_postgresql_temp select * from remote_oracle_table|. The \n>> performance of this operation are very slow and I tried to check the \n>> reason for that and mybe choose a different alternative.\n>>\n>> 1)First method - |Insert into local_postgresql_table select * from \n>> remote_oracle_table| this generated total disk write of 7 M/s and \n>> actual disk write of 4 M/s(iotop). For 32G table it took me 2 hours \n>> and 30 minutes.\n>>\n>> 2)second method - |copy (select * from oracle_remote_table) to \n>> /tmp/dump| generates total disk write of 4 M/s and actuval disk write \n>> of 100 K/s. The copy utility suppose to be very fast but it seems \n>> very slow.\n>>\n>>\n>\n>\n> Are you using a FDW to access oracle server and then dump it using \n> copy? This is going to be slow, FDW isn't fast.\n>\n>\n>> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>>\n>>\n>\n> You reported it was slow before. What has changed? How much does it \n> take to load the 32G table then?\n>\n>\n>> -I created a 32G file on the oracle server and used scp to copy it \n>> and it took me a few minutes.\n>>\n>>\n>> -The wals directory is located on a different file system. The \n>> parameters I assigned :\n>>\n>> |min_parallel_relation_size= 200MB\n>> max_parallel_workers_per_gather= 5\n>> max_worker_processes= 8\n>> effective_cache_size= 12GB\n>> work_mem= 128MB\n>> maintenance_work_mem= 4GB\n>> shared_buffers= 2000MB\n>> RAM: 16G\n>> CPU CORES: 8|\n>>\n>> HOW can I increase the writes ? How can I get the data faster from \n>> the oracle database to my postgresql database?\n>>\n>>\n>\n>\n> Extract the table to a file in the oracle server in a format that the \n> COPY utility can read, then copy it to postgres server and load it. \n> You can even pipe commands and do it in a single step.\n>\n> This is what I meant when I said that COPY is much faster than any \n> thing else. To make it even faster, if I/O is not your bottleneck, you \n> can chop the table in chunks and load it in parallel as I told you \n> before, I have done this many times when migrating data from oracle to \n> postgres. 
ora2pg uses this method to migrate data from oracle to \n> postgres too.\n>\n>>\n>> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky \n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> I realized something weird. When I`m preforming the copy utility\n>> of postgresql in order to create dump from a local table in my\n>> postgresql db it takes for 32G table 20 minutes. When I try to\n>> use copy for a foregin table (on oracle database) It takes more\n>> than 2 hours.. During the copy operation from the foreign table I\n>> dont see alot of write operations, with iotop i see that its\n>> writes 3 M/s. What else I can check ?\n>>\n>> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky\n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> This server is dedicated to be a postgresql production\n>> database, therefore postgresql is the only thing the runs on\n>> the server. The fs that I`m using is xfs. I`ll add two\n>> different disks - one for the wals and one for the temp\n>> tablespace. Regarding the disk, what size should they be\n>> considering that the database size is about 250G. Does 16G of\n>> ram considered little ? I installed iotop and I see that\n>> postgresql writer is writing most of the time and above all.\n>>\n>> I mentioned that I perform alot of insert into table select *\n>> from table. Before that I remove indexes,constraints and\n>> truncate the table. Should I run vacuum before or after the\n>> operation ?\n>>\n>> 2017-08-17 19:37 GMT+03:00 Claudio Freire\n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>> <[email protected]\n>> <mailto:[email protected]>> wrote:\n>> > I checked with the storage team in the company and they\n>> saw that I have alot\n>> > of io on the server. How should I reduce the io that the\n>> postgresql uses ?\n>>\n>> Do you have concurrent activity on that server?\n>>\n>> What filesystem are you using wherever the data is sitting?\n>>\n>> If you've got concurrent fsyncs happening, some\n>> filesystems handle\n>> that poorly. When you've got WAL and data mixed in a\n>> single disk, or\n>> worse, filesystem, it happens often that the filesystem\n>> won't handle\n>> the write barriers for the WAL efficiently. I/O gets\n>> intermingled with\n>> bulk operations, and even small fsyncs will have to flush\n>> writes from\n>> bulk operations, which makes a mess of things.\n>>\n>> It is a very good idea, and in fact a recommended\n>> practice, to put WAL\n>> on its own disk for that reason mainly.\n>>\n>> With that little RAM, you'll also probably cause a lot of\n>> I/O in temp\n>> files, so I'd also recommend setting aside another disk\n>> for a temp\n>> tablespace so that I/O doesn't block other transactions\n>> as well.\n>>\n>> This is all assuming you've got concurrent activity on\n>> the server. If\n>> not, install iotop and try to see who's causing that much\n>> I/O.\n>>\n>>\n>\n> Mariel Cherkassky <mailto:[email protected]>\n> Monday, August 21, 2017 4:00 AM\n> To summarize, I still have performance problems. My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into \n> my postgresql tables. I'm doing so by running |insert into \n> local_postgresql_temp select * from remote_oracle_table|. 
The\n> performance of this operation is very slow and I tried to check the reason for that and maybe choose a different alternative.\n>\n> 1) First method - insert into local_postgresql_table select * from remote_oracle_table. This generated total disk write of 7 M/s and actual disk write of 4 M/s (iotop). For a 32G table it took me 2 hours and 30 minutes.\n>\n> 2) Second method - copy (select * from oracle_remote_table) to /tmp/dump. This generates total disk write of 4 M/s and actual disk write of 100 K/s. The copy utility is supposed to be very fast but it seems very slow.\n>\n> - When I run copy from the local dump, the reading is very fast, 300 M/s.\n>\n> - I created a 32G file on the oracle server and used scp to copy it and it took me a few minutes.\n>\n> - The wals directory is located on a different file system. The parameters I assigned:\n>\n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5\n> max_worker_processes = 8\n> effective_cache_size = 12GB\n> work_mem = 128MB\n> maintenance_work_mem = 4GB\n> shared_buffers = 2000MB\n> RAM: 16G\n> CPU CORES: 8\n>\n> HOW can I increase the writes? How can I get the data faster from the oracle database to my postgresql database?\n>\n> Mariel Cherkassky <mailto:[email protected]>\n> Sunday, August 20, 2017 7:00 AM\n> I realized something weird. When I'm performing the copy utility of postgresql in order to create a dump from a local table in my postgresql db, it takes 20 minutes for a 32G table. When I try to use copy for a foreign table (on an oracle database) it takes more than 2 hours. During the copy operation from the foreign table I don't see a lot of write operations; with iotop I see that it writes 3 M/s. What else can I check?\n>\n> Mariel Cherkassky <mailto:[email protected]>\n> Sunday, August 20, 2017 2:39 AM\n> This server is dedicated to being a postgresql production database, therefore postgresql is the only thing that runs on the server. The fs that I'm using is xfs. I'll add two different disks - one for the wals and one for the temp tablespace. Regarding the disks, what size should they be, considering that the database size is about 250G? Is 16G of RAM considered little? I installed iotop and I see that the postgresql writer is writing most of the time, above all else.\n>\n> I mentioned that I perform a lot of insert into table select * from table. Before that I remove indexes and constraints and truncate the table. Should I run vacuum before or after the operation?\n>\n\nMaybe I missed it in this continuous thread activity, but have you tried \"ora2pg\"? You can export from Oracle and import to Postgres in parallel jobs. The import commands use the efficient COPY command by default (unless you override it in the ora2pg configuration file). You can do the export and subsequent import in memory, but I would suggest the actual file export and import so you can take advantage of the parallel feature.\n\nRegards,\nMichael Vitale",
"msg_date": "Mon, 21 Aug 2017 09:55:32 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
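As a concrete illustration of the ora2pg suggestion above, a minimal data-only export/import might look roughly like the following sketch. The connection strings, table names and job counts are placeholders, and the exact directives and flags should be double-checked against the ora2pg documentation for the installed version.

    # ora2pg.conf (excerpt) -- placeholder credentials and names
    ORACLE_DSN   dbi:Oracle:host=oracle-host;sid=ORCL;port=1521
    ORACLE_USER  scott
    ORACLE_PWD   tiger
    PG_DSN       dbi:Pg:dbname=dbch;host=localhost;port=5432

    # Export table data as COPY statements, using parallel Oracle readers (-J)
    # and parallel import jobs (-j), then load the result into PostgreSQL:
    ora2pg -c ora2pg.conf -t COPY -a MY_BIG_TABLE -J 4 -j 4 -o my_big_table.sql
    psql -d dbch -f my_big_table.sql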
{
"msg_contents": "I had a system that consist from many objects(procedures,functions..) on an\noracle database. We decided to integrate that system to postgresql. That\nsystem coppied alot of big tables from a different read only oracle\ndatabase and preformed on it alot of queries to produce reports. The part\nof getting the data is part of some procedures, I cant change it so freely.\nI'm searching a way to improve the perfomance of the database because I'm\nsure that I didnt conifgure something well. Moreover, When I run complicted\nqueries (joint between 4 big tables and filtering) it takes alot of time\nand I see that the server is cacheing all my ram memory.\n\n2017-08-21 16:55 GMT+03:00 MichaelDBA <[email protected]>:\n\n> Maybe I missed it in this continuous thread activity, but have you tried\n> '''ora2pg\"? You can export from Oracle and import to Postgres in parallel\n> jobs. The import commands use the efficient COPY command by default\n> (unless you override it in the ora2pg configuration file). You can do the\n> export and subsequent import in memory, but I would suggest the actual file\n> export and import so you can take advantage of the parallel feature.\n>\n> Regards,\n> Michael Vitale\n>\n> Mariel Cherkassky <[email protected]>\n> Monday, August 21, 2017 8:22 AM\n> I`m searching for a way to improve the current performance, I'm not\n> interesting in using a different tool or writing something new because I'm\n> trying to migrate a system on oracle database to a postgresql database.\n>\n>\n> Daniel Blanch Bataller <[email protected]>\n> Monday, August 21, 2017 4:37 AM\n>\n> El 21 ago 2017, a las 10:00, Mariel Cherkassky <\n> [email protected]> escribió:\n>\n> To summarize, I still have performance problems. My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into my\n> postgresql tables. I'm doing so by running insert into\n> local_postgresql_temp select * from remote_oracle_table. The performance\n> of this operation are very slow and I tried to check the reason for that\n> and mybe choose a different alternative.\n>\n> 1)First method - Insert into local_postgresql_table select * from\n> remote_oracle_table this generated total disk write of 7 M/s and actual\n> disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>\n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates\n> total disk write of 4 M/s and actuval disk write of 100 K/s. The copy\n> utility suppose to be very fast but it seems very slow.\n>\n>\n>\n> Are you using a FDW to access oracle server and then dump it using copy?\n> This is going to be slow, FDW isn't fast.\n>\n>\n> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>\n>\n> You reported it was slow before. What has changed? How much does it take\n> to load the 32G table then?\n>\n>\n> -I created a 32G file on the oracle server and used scp to copy it and it\n> took me a few minutes.\n>\n> -The wals directory is located on a different file system. The parameters\n> I assigned :\n>\n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5\n> max_worker_processes = 8\n> effective_cache_size = 12GB\n> work_mem = 128MB\n> maintenance_work_mem = 4GB\n> shared_buffers = 2000MB\n> RAM : 16G\n> CPU CORES : 8\n>\n> HOW can I increase the writes ? 
How can I get the data faster from the\n> oracle database to my postgresql database?\n>\n>\n>\n> Extract the table to a file in the oracle server in a format that the COPY\n> utility can read, then copy it to postgres server and load it. You can even\n> pipe commands and do it in a single step.\n>\n> This is what I meant when I said that COPY is much faster than any thing\n> else. To make it even faster, if I/O is not your bottleneck, you can chop\n> the table in chunks and load it in parallel as I told you before, I have\n> done this many times when migrating data from oracle to postgres. ora2pg\n> uses this method to migrate data from oracle to postgres too.\n>\n>\n> 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>\n> :\n>\n>> I realized something weird. When I`m preforming the copy utility of\n>> postgresql in order to create dump from a local table in my postgresql db\n>> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n>> table (on oracle database) It takes more than 2 hours.. During the copy\n>> operation from the foreign table I dont see alot of write operations, with\n>> iotop i see that its writes 3 M/s. What else I can check ?\n>>\n>> 2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>\n>> :\n>>\n>>> This server is dedicated to be a postgresql production database,\n>>> therefore postgresql is the only thing the runs on the server. The fs that\n>>> I`m using is xfs. I`ll add two different disks - one for the wals and one\n>>> for the temp tablespace. Regarding the disk, what size should they be\n>>> considering that the database size is about 250G. Does 16G of ram\n>>> considered little ? I installed iotop and I see that postgresql writer is\n>>> writing most of the time and above all.\n>>>\n>>> I mentioned that I perform alot of insert into table select * from\n>>> table. Before that I remove indexes,constraints and truncate the table.\n>>> Should I run vacuum before or after the operation ?\n>>>\n>>> 2017-08-17 19:37 GMT+03:00 Claudio Freire <[email protected]>:\n>>>\n>>>> On Thu, Aug 17, 2017 at 6:00 AM, Mariel Cherkassky\n>>>> <[email protected]> wrote:\n>>>> > I checked with the storage team in the company and they saw that I\n>>>> have alot\n>>>> > of io on the server. How should I reduce the io that the postgresql\n>>>> uses ?\n>>>>\n>>>> Do you have concurrent activity on that server?\n>>>>\n>>>> What filesystem are you using wherever the data is sitting?\n>>>>\n>>>> If you've got concurrent fsyncs happening, some filesystems handle\n>>>> that poorly. When you've got WAL and data mixed in a single disk, or\n>>>> worse, filesystem, it happens often that the filesystem won't handle\n>>>> the write barriers for the WAL efficiently. I/O gets intermingled with\n>>>> bulk operations, and even small fsyncs will have to flush writes from\n>>>> bulk operations, which makes a mess of things.\n>>>>\n>>>> It is a very good idea, and in fact a recommended practice, to put WAL\n>>>> on its own disk for that reason mainly.\n>>>>\n>>>> With that little RAM, you'll also probably cause a lot of I/O in temp\n>>>> files, so I'd also recommend setting aside another disk for a temp\n>>>> tablespace so that I/O doesn't block other transactions as well.\n>>>>\n>>>> This is all assuming you've got concurrent activity on the server. If\n>>>> not, install iotop and try to see who's causing that much I/O.\n>>>>\n>>>\n>>>\n> Mariel Cherkassky <[email protected]>\n> Monday, August 21, 2017 4:00 AM\n> To summarize, I still have performance problems. 
My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into my\n> postgresql tables. I'm doing so by running insert into\n> local_postgresql_temp select * from remote_oracle_table. The performance\n> of this operation are very slow and I tried to check the reason for that\n> and mybe choose a different alternative.\n>\n> 1)First method - Insert into local_postgresql_table select * from\n> remote_oracle_table this generated total disk write of 7 M/s and actual\n> disk write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>\n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump generates\n> total disk write of 4 M/s and actuval disk write of 100 K/s. The copy\n> utility suppose to be very fast but it seems very slow.\n>\n> -When I run copy from the local dump, the reading is very fast 300 M/s.\n>\n> -I created a 32G file on the oracle server and used scp to copy it and it\n> took me a few minutes.\n>\n> -The wals directory is located on a different file system. The parameters\n> I assigned :\n> min_parallel_relation_size = 200MB\n> max_parallel_workers_per_gather = 5\n> max_worker_processes = 8\n> effective_cache_size = 12GB\n> work_mem = 128MB\n> maintenance_work_mem = 4GB\n> shared_buffers = 2000MB\n> RAM : 16G\n> CPU CORES : 8\n>\n> HOW can I increase the writes ? How can I get the data faster from the\n> oracle database to my postgresql database?\n>\n> Mariel Cherkassky <[email protected]>\n> Sunday, August 20, 2017 7:00 AM\n> I realized something weird. When I`m preforming the copy utility of\n> postgresql in order to create dump from a local table in my postgresql db\n> it takes for 32G table 20 minutes. When I try to use copy for a foregin\n> table (on oracle database) It takes more than 2 hours.. During the copy\n> operation from the foreign table I dont see alot of write operations, with\n> iotop i see that its writes 3 M/s. What else I can check ?\n>\n> Mariel Cherkassky <[email protected]>\n> Sunday, August 20, 2017 2:39 AM\n> This server is dedicated to be a postgresql production database, therefore\n> postgresql is the only thing the runs on the server. The fs that I`m using\n> is xfs. I`ll add two different disks - one for the wals and one for the\n> temp tablespace. Regarding the disk, what size should they be considering\n> that the database size is about 250G. Does 16G of ram considered little ? I\n> installed iotop and I see that postgresql writer is writing most of the\n> time and above all.\n>\n> I mentioned that I perform alot of insert into table select * from table.\n> Before that I remove indexes,constraints and truncate the table. Should I\n> run vacuum before or after the operation ?\n>\n>\n>\n>\n\nI had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.2017-08-21 16:55 GMT+03:00 MichaelDBA <[email protected]>:\nMaybe I missed it in this \ncontinuous thread activity, but have you tried '''ora2pg\"? 
You can \nexport from Oracle and import to Postgres in parallel jobs. The import \ncommands use the efficient COPY command by default (unless you override \nit in the ora2pg configuration file). You can do the export and \nsubsequent import in memory, but I would suggest the actual file export \nand import so you can take advantage of the parallel feature.\n\nRegards,\nMichael Vitale\n\n\n\n \nMariel Cherkassky Monday,\n August 21, 2017 8:22 AM \nI`m\n searching for a way to improve the current performance, I'm not \ninteresting in using a different tool or writing something new because \nI'm trying to migrate a system on oracle database to a postgresql \ndatabase.\n\n \nDaniel Blanch Bataller Monday,\n August 21, 2017 4:37 AM \nEl 21 ago 2017, a las 10:00, Mariel \nCherkassky <[email protected]>\n escribió:To summarize, I still have \nperformance problems. My current situation : I'm\n trying to copy the data of many tables in the oracle database into my \npostgresql tables. I'm doing so by running insert\n into local_postgresql_temp select * from remote_oracle_table. \nThe performance of this operation are very slow and I tried to check the\n reason for that and mybe choose a different alternative.1)First\n method - Insert\n into local_postgresql_table select * from remote_oracle_table this\n generated total disk write of 7 M/s and actual disk write of 4 \nM/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second\n method - copy\n (select * from oracle_remote_table) to /tmp/dump generates total\n disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility\n suppose to be very fast but it seems very slow.Are you using a FDW to access oracle server and \nthen dump it using copy? This is going to be slow, FDW isn't fast.-When\n I run copy from the local dump, the reading is very fast 300 M/s.You reported it was slow before. What has changed? \nHow much does it take to load the 32G table then?-I\n created a 32G file on the oracle server and used scp to copy it and it \ntook me a few minutes.-The\n wals directory is located on a different file system. The parameters I \nassigned :min_parallel_relation_size = 200MB\nmax_parallel_workers_per_gather = 5 \nmax_worker_processes = 8 \neffective_cache_size = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\nshared_buffers = 2000MB\nRAM : 16G\nCPU CORES : 8HOW\n can I increase the writes ? How can I get the data faster from the \noracle database to my postgresql database?Extract the table to a file\n in the oracle server in a format that the COPY utility can read, then \ncopy it to postgres server and load it. You can even pipe commands and \ndo it in a single step.This is what I\n meant when I said that COPY is much faster than any thing else. To make\n it even faster, if I/O is not your bottleneck, you can chop the table \nin chunks and load it in parallel as I told you before, I have done this\n many times when migrating data from oracle to postgres. ora2pg uses \nthis method to migrate data from oracle to postgres too. 2017-08-20 14:00 GMT+03:00 Mariel Cherkassky <[email protected]>:I \nrealized something weird. When I`m preforming the copy utility of \npostgresql in order to create dump from a local table in my postgresql \ndb it takes for 32G table 20 minutes. When I try to use copy for a \nforegin table (on oracle database) It takes more than 2 hours.. During \nthe copy operation from the foreign table I dont see alot of write \noperations, with iotop i see that its writes 3 M/s. What else I can \ncheck ? 
2017-08-20 9:39 GMT+03:00 Mariel Cherkassky <[email protected]>:This\n server is dedicated to be a postgresql production database, therefore \npostgresql is the only thing the runs on the server. The fs that I`m \nusing is xfs. I`ll add two different disks - one for the wals and one \nfor the temp tablespace. Regarding the disk, what size should they be \nconsidering that the database size is about 250G. Does 16G of ram \nconsidered little ? I installed iotop and I see that postgresql writer \nis writing most of the time and above all.I mentioned that I perform alot \nof insert into table select * from table. Before that I remove \nindexes,constraints and truncate the table. Should I run vacuum before \nor after the operation ? 2017-08-17 19:37 GMT+03:00 \nClaudio Freire <[email protected]>:On Thu, Aug 17, 2017 at 6:00 AM, \nMariel Cherkassky\n<[email protected]>\n wrote:\n> I checked with the storage team in the company and they saw that I \nhave alot\n> of io on the server. How should I reduce the io that the postgresql\n uses ?\n\nDo you have concurrent activity on that server?\n\nWhat filesystem are you using wherever the data is sitting?\n\nIf you've got concurrent fsyncs happening, some filesystems handle\nthat poorly. When you've got WAL and data mixed in a single disk, or\nworse, filesystem, it happens often that the filesystem won't handle\nthe write barriers for the WAL efficiently. I/O gets intermingled with\nbulk operations, and even small fsyncs will have to flush writes from\nbulk operations, which makes a mess of things.\n\nIt is a very good idea, and in fact a recommended practice, to put WAL\non its own disk for that reason mainly.\n\nWith that little RAM, you'll also probably cause a lot of I/O in temp\nfiles, so I'd also recommend setting aside another disk for a temp\ntablespace so that I/O doesn't block other transactions as well.\n\nThis is all assuming you've got concurrent activity on the server. If\nnot, install iotop and try to see who's causing that much I/O.\n\n\n\n \nMariel Cherkassky Monday,\n August 21, 2017 4:00 AM \nTo\n summarize, I still have performance problems. My current situation : I'm\n trying to copy the data of many tables in the oracle database into my \npostgresql tables. I'm doing so by running insert\n into local_postgresql_temp select * from remote_oracle_table. \nThe performance of this operation are very slow and I tried to check the\n reason for that and mybe choose a different alternative.1)First\n method - Insert\n into local_postgresql_table select * from remote_oracle_table this\n generated total disk write of 7 M/s and actual disk write of 4 \nM/s(iotop). For 32G table it took me 2 hours and 30 minutes.2)second\n method - copy\n (select * from oracle_remote_table) to /tmp/dump generates total\n disk write of 4 M/s and actuval disk write of 100 K/s. The copy utility\n suppose to be very fast but it seems very slow.-When\n I run copy from the local dump, the reading is very fast 300 M/s.-I\n created a 32G file on the oracle server and used scp to copy it and it \ntook me a few minutes.-The\n wals directory is located on a different file system. The parameters I \nassigned :min_parallel_relation_size = 200MBmax_parallel_workers_per_gather = 5 max_worker_processes = 8 effective_cache_size = 12GBwork_mem = 128MBmaintenance_work_mem = 4GBshared_buffers = 2000MBRAM : 16GCPU CORES : 8HOW\n can I increase the writes ? 
How can I get the data faster from the \noracle database to my postgresql database?\n\n \nMariel Cherkassky Sunday,\n August 20, 2017 7:00 AM \nI\n realized something weird. When I`m preforming the copy utility of \npostgresql in order to create dump from a local table in my postgresql \ndb it takes for 32G table 20 minutes. When I try to use copy for a \nforegin table (on oracle database) It takes more than 2 hours.. During \nthe copy operation from the foreign table I dont see alot of write \noperations, with iotop i see that its writes 3 M/s. What else I can \ncheck ? \n\n \nMariel Cherkassky Sunday,\n August 20, 2017 2:39 AM \nThis\n server is dedicated to be a postgresql production database, therefore \npostgresql is the only thing the runs on the server. The fs that I`m \nusing is xfs. I`ll add two different disks - one for the wals and one \nfor the temp tablespace. Regarding the disk, what size should they be \nconsidering that the database size is about 250G. Does 16G of ram \nconsidered little ? I installed iotop and I see that postgresql writer \nis writing most of the time and above all.I mentioned that I perform alot of insert into table select *\n from table. Before that I remove indexes,constraints and truncate the \ntable. Should I run vacuum before or after the operation ?",
"msg_date": "Mon, 21 Aug 2017 17:19:57 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
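The extract-to-a-file-and-pipe approach suggested earlier in the thread can be sketched as a single pipeline like the one below. This is only an outline: the connect strings and table names are placeholders, and the SQL*Plus CSV formatting option (SET MARKUP CSV ON) exists only in newer SQL*Plus releases, so an older 11.2 client would need a manually concatenated SELECT or a tool such as SQLcl instead.

    # Stream the Oracle table straight into PostgreSQL's COPY, no intermediate file:
    sqlplus -s scott/tiger@ORCL <<'EOF' | psql -d dbch -c "COPY local_postgresql_table FROM STDIN WITH (FORMAT csv)"
    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
    SET MARKUP CSV ON QUOTE ON
    SELECT * FROM remote_oracle_table;
    EXIT
    EOF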
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Mariel Cherkassky\r\nSent: Monday, August 21, 2017 10:20 AM\r\nTo: MichaelDBA <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] performance problem on big tables\r\n\r\nI had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.\r\n\r\n\r\nProbably your joins are done on Postgres side.\r\n\r\nm.b. instead of Postgres pulling data from Oracle, you should try pushing data from Oracle to Postgres using Oracle’s Heterogeneous Services and Postgres ODBC driver. In this case you do your joins and filtering on Oracles side and just push the result set to Postgres.\r\nThat’s how I did migration from Oracle to Postgres.\r\n\r\nRegards,\r\nIgor Neyman\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Mariel Cherkassky\nSent: Monday, August 21, 2017 10:20 AM\nTo: MichaelDBA <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] performance problem on big tables\n\n\n\n\n\n \nI had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed\r\n on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When\r\n I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.\n\n\n\n\n \n\n \nProbably your joins are done on Postgres side.\n \nm.b. instead of Postgres pulling data from Oracle, you should try pushing data from Oracle to Postgres using Oracle’s Heterogeneous Services and Postgres ODBC\r\n driver. In this case you do your joins and filtering on Oracles side and just push the result set to Postgres.\nThat’s how I did migration from Oracle to Postgres.\n \nRegards,\nIgor Neyman",
"msg_date": "Mon, 21 Aug 2017 14:35:30 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
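For reference, the push approach Igor describes is usually built on an Oracle database link over the ODBC gateway (dg4odbc). The sketch below assumes the gateway and an ODBC DSN for the Postgres database have already been configured in listener.ora/tnsnames.ora; all names are placeholders, whether DML over the gateway is usable depends on the gateway configuration, and as noted later in the thread it is not an option when the Oracle side is read only.

    -- On the Oracle side: link to the PostgreSQL database through the ODBC gateway
    CREATE PUBLIC DATABASE LINK pglink
      CONNECT TO "pg_user" IDENTIFIED BY "pg_password"
      USING 'PGDSN';

    -- Do the joins/filtering in Oracle and push only the result set to Postgres
    INSERT INTO "report_table"@pglink
    SELECT t1.id, t1.val, t2.other
    FROM   big_table1 t1 JOIN big_table2 t2 ON t2.id = t1.id
    WHERE  t1.created_at >= SYSDATE - 1;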
{
"msg_contents": "I already finished migrating the system from oracle to postgresql. Right\nnow, I'm trying to improve its performance - Im bringing data from another\nread only database that is updaded every minute. I cant push data from the\noracle side to the postgresql side because the oracle database is read only.\n\n2017-08-21 17:35 GMT+03:00 Igor Neyman <[email protected]>:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Mariel Cherkassky\n> *Sent:* Monday, August 21, 2017 10:20 AM\n> *To:* MichaelDBA <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] performance problem on big tables\n>\n>\n>\n> I had a system that consist from many objects(procedures,functions..) on\n> an oracle database. We decided to integrate that system to postgresql. That\n> system coppied alot of big tables from a different read only oracle\n> database and preformed on it alot of queries to produce reports. The part\n> of getting the data is part of some procedures, I cant change it so freely.\n> I'm searching a way to improve the perfomance of the database because I'm\n> sure that I didnt conifgure something well. Moreover, When I run complicted\n> queries (joint between 4 big tables and filtering) it takes alot of time\n> and I see that the server is cacheing all my ram memory.\n>\n>\n>\n>\n>\n> Probably your joins are done on Postgres side.\n>\n>\n>\n> m.b. instead of Postgres pulling data from Oracle, you should try pushing\n> data from Oracle to Postgres using Oracle’s Heterogeneous Services and\n> Postgres ODBC driver. In this case you do your joins and filtering on\n> Oracles side and just push the result set to Postgres.\n>\n> That’s how I did migration from Oracle to Postgres.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n\nI already finished migrating the system from oracle to postgresql. Right now, I'm trying to improve its performance - Im bringing data from another read only database that is updaded every minute. I cant push data from the oracle side to the postgresql side because the oracle database is read only.2017-08-21 17:35 GMT+03:00 Igor Neyman <[email protected]>:\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Mariel Cherkassky\nSent: Monday, August 21, 2017 10:20 AM\nTo: MichaelDBA <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] performance problem on big tables\n\n\n\n\n\n \nI had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed\n on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When\n I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.\n\n\n\n\n \n\n \nProbably your joins are done on Postgres side.\n \nm.b. instead of Postgres pulling data from Oracle, you should try pushing data from Oracle to Postgres using Oracle’s Heterogeneous Services and Postgres ODBC\n driver. In this case you do your joins and filtering on Oracles side and just push the result set to Postgres.\nThat’s how I did migration from Oracle to Postgres.\n \nRegards,\nIgor Neyman",
"msg_date": "Mon, 21 Aug 2017 17:37:22 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "If your procedures to get the data is part is a query predicate, then you can still use ora2pg\n\nSent from my iPhone\n\n> On Aug 21, 2017, at 10:35 AM, Igor Neyman <[email protected]> wrote:\n> \n> \n> From: [email protected] [mailto:[email protected]] On Behalf Of Mariel Cherkassky\n> Sent: Monday, August 21, 2017 10:20 AM\n> To: MichaelDBA <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [PERFORM] performance problem on big tables\n> \n> I had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.\n> \n> \n> Probably your joins are done on Postgres side.\n> \n> m.b. instead of Postgres pulling data from Oracle, you should try pushing data from Oracle to Postgres using Oracle’s Heterogeneous Services and Postgres ODBC driver. In this case you do your joins and filtering on Oracles side and just push the result set to Postgres.\n> That’s how I did migration from Oracle to Postgres.\n> \n> Regards,\n> Igor Neyman\n\nIf your procedures to get the data is part is a query predicate, then you can still use ora2pgSent from my iPhoneOn Aug 21, 2017, at 10:35 AM, Igor Neyman <[email protected]> wrote:\n\n\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Mariel Cherkassky\nSent: Monday, August 21, 2017 10:20 AM\nTo: MichaelDBA <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] performance problem on big tables\n\n\n\n\n\n \nI had a system that consist from many objects(procedures,functions..) on an oracle database. We decided to integrate that system to postgresql. That system coppied alot of big tables from a different read only oracle database and preformed\n on it alot of queries to produce reports. The part of getting the data is part of some procedures, I cant change it so freely. I'm searching a way to improve the perfomance of the database because I'm sure that I didnt conifgure something well. Moreover, When\n I run complicted queries (joint between 4 big tables and filtering) it takes alot of time and I see that the server is cacheing all my ram memory.\n\n\n\n\n \n\n \nProbably your joins are done on Postgres side.\n \nm.b. instead of Postgres pulling data from Oracle, you should try pushing data from Oracle to Postgres using Oracle’s Heterogeneous Services and Postgres ODBC\n driver. In this case you do your joins and filtering on Oracles side and just push the result set to Postgres.\nThat’s how I did migration from Oracle to Postgres.\n \nRegards,\nIgor Neyman",
"msg_date": "Mon, 21 Aug 2017 10:51:47 -0400",
"msg_from": "Michael DNA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
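If I remember the ora2pg configuration file correctly, per-table filters can be attached to the data export, which seems to be what is meant above; the directive names and bracket syntax below are from memory and should be verified against the ora2pg documentation before use.

    # ora2pg.conf (excerpt) -- hypothetical per-table filters
    # Apply a WHERE clause to a single table during data export:
    WHERE          MY_BIG_TABLE[updated_at >= TRUNC(SYSDATE) - 1]
    # Or replace the extraction query for that table entirely:
    REPLACE_QUERY  MY_BIG_TABLE[SELECT * FROM my_big_table WHERE status = 'ACTIVE']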
{
"msg_contents": "On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> To summarize, I still have performance problems. My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into my\n> postgresql tables. I'm doing so by running insert into local_postgresql_temp\n> select * from remote_oracle_table. The performance of this operation are\n> very slow and I tried to check the reason for that and mybe choose a\n> different alternative.\n>\n> 1)First method - Insert into local_postgresql_table select * from\n> remote_oracle_table this generated total disk write of 7 M/s and actual disk\n> write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>\n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump\n> generates total disk write of 4 M/s and actuval disk write of 100 K/s. The\n> copy utility suppose to be very fast but it seems very slow.\n\nHave you tried increasing the prefetch option in the remote table?\n\nIf you left it in its default, latency could be hurting your ability\nto saturate the network.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Aug 2017 20:15:44 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "Hi Claudio, how can I do that ? Can you explain me what is this option ?\n\n2017-08-24 2:15 GMT+03:00 Claudio Freire <[email protected]>:\n\n> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > To summarize, I still have performance problems. My current situation :\n> >\n> > I'm trying to copy the data of many tables in the oracle database into my\n> > postgresql tables. I'm doing so by running insert into\n> local_postgresql_temp\n> > select * from remote_oracle_table. The performance of this operation are\n> > very slow and I tried to check the reason for that and mybe choose a\n> > different alternative.\n> >\n> > 1)First method - Insert into local_postgresql_table select * from\n> > remote_oracle_table this generated total disk write of 7 M/s and actual\n> disk\n> > write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n> >\n> > 2)second method - copy (select * from oracle_remote_table) to /tmp/dump\n> > generates total disk write of 4 M/s and actuval disk write of 100 K/s.\n> The\n> > copy utility suppose to be very fast but it seems very slow.\n>\n> Have you tried increasing the prefetch option in the remote table?\n>\n> If you left it in its default, latency could be hurting your ability\n> to saturate the network.\n>\n\nHi Claudio, how can I do that ? Can you explain me what is this option ?2017-08-24 2:15 GMT+03:00 Claudio Freire <[email protected]>:On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> To summarize, I still have performance problems. My current situation :\n>\n> I'm trying to copy the data of many tables in the oracle database into my\n> postgresql tables. I'm doing so by running insert into local_postgresql_temp\n> select * from remote_oracle_table. The performance of this operation are\n> very slow and I tried to check the reason for that and mybe choose a\n> different alternative.\n>\n> 1)First method - Insert into local_postgresql_table select * from\n> remote_oracle_table this generated total disk write of 7 M/s and actual disk\n> write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>\n> 2)second method - copy (select * from oracle_remote_table) to /tmp/dump\n> generates total disk write of 4 M/s and actuval disk write of 100 K/s. The\n> copy utility suppose to be very fast but it seems very slow.\n\nHave you tried increasing the prefetch option in the remote table?\n\nIf you left it in its default, latency could be hurting your ability\nto saturate the network.",
"msg_date": "Thu, 24 Aug 2017 10:51:11 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi Claudio, how can I do that ? Can you explain me what is this option ?\n>\n> 2017-08-24 2:15 GMT+03:00 Claudio Freire <[email protected]>:\n>>\n>> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n>> <[email protected]> wrote:\n>> > To summarize, I still have performance problems. My current situation :\n>> >\n>> > I'm trying to copy the data of many tables in the oracle database into\n>> > my\n>> > postgresql tables. I'm doing so by running insert into\n>> > local_postgresql_temp\n>> > select * from remote_oracle_table. The performance of this operation are\n>> > very slow and I tried to check the reason for that and mybe choose a\n>> > different alternative.\n>> >\n>> > 1)First method - Insert into local_postgresql_table select * from\n>> > remote_oracle_table this generated total disk write of 7 M/s and actual\n>> > disk\n>> > write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>> >\n>> > 2)second method - copy (select * from oracle_remote_table) to /tmp/dump\n>> > generates total disk write of 4 M/s and actuval disk write of 100 K/s.\n>> > The\n>> > copy utility suppose to be very fast but it seems very slow.\n>>\n>> Have you tried increasing the prefetch option in the remote table?\n>>\n>> If you left it in its default, latency could be hurting your ability\n>> to saturate the network.\n>\n>\n\nPlease don't top-post.\n\nI'm assuming you're using this: http://laurenz.github.io/oracle_fdw/\n\nIf you check the docs, you'll see this:\nhttps://github.com/laurenz/oracle_fdw#foreign-table-options\n\nSo I'm guessing you could:\n\nALTER FOREIGN TABLE remote_table OPTIONS ( SET prefetch 10240 );\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Aug 2017 13:14:05 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it\nbut I'm getting error\n\ndbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch\n10240 );\nERROR: syntax error at or near \"10240\"\nLINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );\n\n\ndbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch\n'10240');\nERROR: option \"prefetch\" not found\n\n\n\n\n2017-08-24 19:14 GMT+03:00 Claudio Freire <[email protected]>:\n\n> On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > Hi Claudio, how can I do that ? Can you explain me what is this option ?\n> >\n> > 2017-08-24 2:15 GMT+03:00 Claudio Freire <[email protected]>:\n> >>\n> >> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n> >> <[email protected]> wrote:\n> >> > To summarize, I still have performance problems. My current situation\n> :\n> >> >\n> >> > I'm trying to copy the data of many tables in the oracle database into\n> >> > my\n> >> > postgresql tables. I'm doing so by running insert into\n> >> > local_postgresql_temp\n> >> > select * from remote_oracle_table. The performance of this operation\n> are\n> >> > very slow and I tried to check the reason for that and mybe choose a\n> >> > different alternative.\n> >> >\n> >> > 1)First method - Insert into local_postgresql_table select * from\n> >> > remote_oracle_table this generated total disk write of 7 M/s and\n> actual\n> >> > disk\n> >> > write of 4 M/s(iotop). For 32G table it took me 2 hours and 30\n> minutes.\n> >> >\n> >> > 2)second method - copy (select * from oracle_remote_table) to\n> /tmp/dump\n> >> > generates total disk write of 4 M/s and actuval disk write of 100 K/s.\n> >> > The\n> >> > copy utility suppose to be very fast but it seems very slow.\n> >>\n> >> Have you tried increasing the prefetch option in the remote table?\n> >>\n> >> If you left it in its default, latency could be hurting your ability\n> >> to saturate the network.\n> >\n> >\n>\n> Please don't top-post.\n>\n> I'm assuming you're using this: http://laurenz.github.io/oracle_fdw/\n>\n> If you check the docs, you'll see this:\n> https://github.com/laurenz/oracle_fdw#foreign-table-options\n>\n> So I'm guessing you could:\n>\n> ALTER FOREIGN TABLE remote_table OPTIONS ( SET prefetch 10240 );\n>\n\nHi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it but I'm getting error dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );ERROR: syntax error at or near \"10240\"LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );dbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch '10240');ERROR: option \"prefetch\" not found2017-08-24 19:14 GMT+03:00 Claudio Freire <[email protected]>:On Thu, Aug 24, 2017 at 4:51 AM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi Claudio, how can I do that ? Can you explain me what is this option ?\n>\n> 2017-08-24 2:15 GMT+03:00 Claudio Freire <[email protected]>:\n>>\n>> On Mon, Aug 21, 2017 at 5:00 AM, Mariel Cherkassky\n>> <[email protected]> wrote:\n>> > To summarize, I still have performance problems. My current situation :\n>> >\n>> > I'm trying to copy the data of many tables in the oracle database into\n>> > my\n>> > postgresql tables. I'm doing so by running insert into\n>> > local_postgresql_temp\n>> > select * from remote_oracle_table. 
The performance of this operation are\n>> > very slow and I tried to check the reason for that and mybe choose a\n>> > different alternative.\n>> >\n>> > 1)First method - Insert into local_postgresql_table select * from\n>> > remote_oracle_table this generated total disk write of 7 M/s and actual\n>> > disk\n>> > write of 4 M/s(iotop). For 32G table it took me 2 hours and 30 minutes.\n>> >\n>> > 2)second method - copy (select * from oracle_remote_table) to /tmp/dump\n>> > generates total disk write of 4 M/s and actuval disk write of 100 K/s.\n>> > The\n>> > copy utility suppose to be very fast but it seems very slow.\n>>\n>> Have you tried increasing the prefetch option in the remote table?\n>>\n>> If you left it in its default, latency could be hurting your ability\n>> to saturate the network.\n>\n>\n\nPlease don't top-post.\n\nI'm assuming you're using this: http://laurenz.github.io/oracle_fdw/\n\nIf you check the docs, you'll see this:\nhttps://github.com/laurenz/oracle_fdw#foreign-table-options\n\nSo I'm guessing you could:\n\nALTER FOREIGN TABLE remote_table OPTIONS ( SET prefetch 10240 );",
"msg_date": "Sun, 27 Aug 2017 19:34:45 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it\n> but I'm getting error\n>\n> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240\n> );\n> ERROR: syntax error at or near \"10240\"\n> LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );\n\nYeah, might need to put the 10240 in quotes.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Aug 2017 02:47:04 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
{
"msg_contents": "On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it\n> but I'm getting error\n>\n> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240\n> );\n> ERROR: syntax error at or near \"10240\"\n> LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );\n>\n>\n> dbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch\n> '10240');\n> ERROR: option \"prefetch\" not found\n\nOh, sorry, I hadn't seen this until I hit send.\n\nUnless the documentation is inaccurate or you're using a really old\nversion (from the changelog that option is from 2016), that should\nwork.\n\nI don't have enough experience with oracle_fdw to help there, most of\nmy dealings have been with postgres_fdw.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Aug 2017 02:51:22 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance problem on big tables"
},
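For what it's worth, the "option not found" error reported above is what PostgreSQL itself raises when OPTIONS (SET ...) is used on an option that was never defined on the foreign table; the first assignment has to use ADD, and the value must be quoted. A sketch, using the table name from the thread:

    -- First time the option is attached to the foreign table:
    ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS (ADD prefetch '10240');

    -- Subsequent changes use SET:
    ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS (SET prefetch '1000');

    -- Inspect the options currently attached to the table:
    SELECT ftoptions FROM pg_foreign_table
    WHERE ftrelid = 'tc_sub_rate_ver_prod'::regclass;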
{
"msg_contents": "I have the newest version :\nselect oracle_diag();\n oracle_diag\n---------------------------------------------------------------------------------------------------------------------\n oracle_fdw 1.5.0, PostgreSQL 9.6.3, Oracle client 11.2.0.4.0,\nORACLE_HOME=/PostgreSQL/9.6/tools/instantclient_11_2/\n(1 row)\n\n\nIs there a prefetch also for local tables ? I mean If I run with a cursor\nover results of a select query, mybe setting the prefetch for a local table\nmight also improve performance ?\n\n2017-08-28 8:51 GMT+03:00 Claudio Freire <[email protected]>:\n\n> On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run\n> it\n> > but I'm getting error\n> >\n> > dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch\n> 10240\n> > );\n> > ERROR: syntax error at or near \"10240\"\n> > LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );\n> >\n> >\n> > dbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch\n> > '10240');\n> > ERROR: option \"prefetch\" not found\n>\n> Oh, sorry, I hadn't seen this until I hit send.\n>\n> Unless the documentation is inaccurate or you're using a really old\n> version (from the changelog that option is from 2016), that should\n> work.\n>\n> I don't have enough experience with oracle_fdw to help there, most of\n> my dealings have been with postgres_fdw.\n>\n\nI have the newest version : select oracle_diag(); oracle_diag--------------------------------------------------------------------------------------------------------------------- oracle_fdw 1.5.0, PostgreSQL 9.6.3, Oracle client 11.2.0.4.0, ORACLE_HOME=/PostgreSQL/9.6/tools/instantclient_11_2/(1 row)Is there a prefetch also for local tables ? I mean If I run with a cursor over results of a select query, mybe setting the prefetch for a local table might also improve performance ?2017-08-28 8:51 GMT+03:00 Claudio Freire <[email protected]>:On Sun, Aug 27, 2017 at 1:34 PM, Mariel Cherkassky\n<[email protected]> wrote:\n> Hi, yes indeed I'm using laurenz`s oracle_fdw extension. I tried to run it\n> but I'm getting error\n>\n> dbch=# ALTER FOREIGN TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240\n> );\n> ERROR: syntax error at or near \"10240\"\n> LINE 1: ...N TABLE tc_sub_rate_ver_prod OPTIONS ( SET prefetch 10240 );\n>\n>\n> dbch=# alter foreign table tc_sub_rate_ver_prod OPTIONS (SET prefetch\n> '10240');\n> ERROR: option \"prefetch\" not found\n\nOh, sorry, I hadn't seen this until I hit send.\n\nUnless the documentation is inaccurate or you're using a really old\nversion (from the changelog that option is from 2016), that should\nwork.\n\nI don't have enough experience with oracle_fdw to help there, most of\nmy dealings have been with postgres_fdw.",
"msg_date": "Mon, 28 Aug 2017 09:05:30 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance problem on big tables"
}
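Regarding the last question: local tables have no prefetch option; the closest client-side equivalents are fetching through a cursor in batches, or psql's FETCH_COUNT variable. Table and cursor names below are illustrative.

    -- Fetch a large result set in batches instead of materializing it all at once:
    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM local_postgresql_table;
    FETCH FORWARD 10000 FROM big_cur;   -- repeat until it returns no rows
    CLOSE big_cur;
    COMMIT;

    -- In psql, a similar effect for plain SELECTs:
    -- \set FETCH_COUNT 10000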
] |
[
{
"msg_contents": "This particular db is on 9.3.15. Recently we had a serious performance\ndegradation related to a batch job that creates 4-5 temp tables and 5\nindexes. It is a really badly written job but what really confuses us is\nthat this job has been running for years with no issue remotely approaching\nthis one. We are also using pgpool.\n\nThe job would kick off with 20-30 of similar queries running at once. The\nthing normally takes only 30ms or so to run - it only operates on 1\ncustomer at a time (yes, it's horribly written). All of a sudden the\ncluster started thrashing and performance seriously degraded. We tried a\nnumber of things with no success:\n\n - Analyzed the whole database\n - Turned off full logging\n - Turned off synchronous commit\n - Vacuumed several of the catalog tables\n - Checked if we had an abnormal high amount of traffic this time - we\n didn't\n - No abnormal disk/network issues (we would have seen much larger issues\n if that had been the case)\n - Tried turning down the number of app nodes running\n\nWhat ended up completely resolving the issue was converting the query to\nuse ctes instead of temp tables. That means we avoided the disk writing\nand the catalog churn, and useless indexes. However, we are baffled as to\nwhy this could make such a big difference when we had no issue like this\nbefore, and we have seen no systematic performance degradation in our\nsystem.\n\nAny insights would be greatly appreciated, as we are concerned not knowing\nthe root cause.\n\nThanks,\nJeremy\n\nThis particular db is on 9.3.15. Recently we had a serious performance degradation related to a batch job that creates 4-5 temp tables and 5 indexes. It is a really badly written job but what really confuses us is that this job has been running for years with no issue remotely approaching this one. We are also using pgpool.The job would kick off with 20-30 of similar queries running at once. The thing normally takes only 30ms or so to run - it only operates on 1 customer at a time (yes, it's horribly written). All of a sudden the cluster started thrashing and performance seriously degraded. We tried a number of things with no success:Analyzed the whole databaseTurned off full loggingTurned off synchronous commitVacuumed several of the catalog tablesChecked if we had an abnormal high amount of traffic this time - we didn'tNo abnormal disk/network issues (we would have seen much larger issues if that had been the case)Tried turning down the number of app nodes runningWhat ended up completely resolving the issue was converting the query to use ctes instead of temp tables. That means we avoided the disk writing and the catalog churn, and useless indexes. However, we are baffled as to why this could make such a big difference when we had no issue like this before, and we have seen no systematic performance degradation in our system.Any insights would be greatly appreciated, as we are concerned not knowing the root cause.Thanks,Jeremy",
"msg_date": "Mon, 14 Aug 2017 14:53:48 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd sudden performance degradation related to temp object churn"
},
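A minimal sketch of the kind of rewrite described above, with made-up table and column names: the first form creates, indexes and drops a temporary table on every run (catalog churn plus relation files on disk), while the second keeps the intermediate result inside a single statement as a CTE.

    -- Old pattern (one run per customer): temp table + index every time
    BEGIN;
    CREATE TEMP TABLE tmp_cust ON COMMIT DROP AS
        SELECT order_id, order_date, amount
        FROM orders WHERE customer_id = 42;
    CREATE INDEX ON tmp_cust (order_date);
    SELECT date_trunc('month', order_date) AS month, sum(amount)
    FROM tmp_cust GROUP BY 1;
    COMMIT;

    -- CTE-based rewrite: no catalog entries, no index builds, no temp relation files
    WITH tmp_cust AS (
        SELECT order_id, order_date, amount
        FROM orders WHERE customer_id = 42
    )
    SELECT date_trunc('month', order_date) AS month, sum(amount)
    FROM tmp_cust GROUP BY 1;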
{
"msg_contents": "On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <[email protected]> wrote:\n> This particular db is on 9.3.15. Recently we had a serious performance\n> degradation related to a batch job that creates 4-5 temp tables and 5\n> indexes. It is a really badly written job but what really confuses us is\n> that this job has been running for years with no issue remotely approaching\n> this one. We are also using pgpool.\n>\n> The job would kick off with 20-30 of similar queries running at once. The\n> thing normally takes only 30ms or so to run - it only operates on 1 customer\n> at a time (yes, it's horribly written). All of a sudden the cluster started\n> thrashing and performance seriously degraded. We tried a number of things\n> with no success:\n>\n> Analyzed the whole database\n> Turned off full logging\n> Turned off synchronous commit\n> Vacuumed several of the catalog tables\n> Checked if we had an abnormal high amount of traffic this time - we didn't\n> No abnormal disk/network issues (we would have seen much larger issues if\n> that had been the case)\n> Tried turning down the number of app nodes running\n>\n> What ended up completely resolving the issue was converting the query to use\n> ctes instead of temp tables. That means we avoided the disk writing and the\n> catalog churn, and useless indexes. However, we are baffled as to why this\n> could make such a big difference when we had no issue like this before, and\n> we have seen no systematic performance degradation in our system.\n>\n> Any insights would be greatly appreciated, as we are concerned not knowing\n> the root cause.\n\nHow are your disks setup? One big drive with everything on it?\nSeparate disks for pg_xlog and pg's data dir and the OS logging? IO\ncontention is one of the big killers of db performance.\n\nLogging likely isn't your problem, but yeah you don't need to log\nERRYTHANG to see the problem either. Log long running queries temp\nusage, buffer usage, query plans on slow queries, stuff like that.\n\nYou've likely hit a \"tipping point\" in terms of data size. Either it's\ncause the query planner to make a bad decision, or you're spilling to\ndisk a lot more than you used to.\n\nBe sure to log temporary stuff with log_temp_files = 0 in your\npostgresql.conf and then look for temporary file in your logs. I bet\nyou've started spilling into the same place as your temp tables are\ngoing, and by default that's your data directory. Adding another drive\nand moving pgsql's temp table space to it might help.\n\nAlso increasing work_mem (but don't go crazy, it's per sort, so can\nmultiply fast on a busy server)\n\nAlso log your query plans or run explain / explain analyze on the slow\nqueries to see what they're doing that's so expensive.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 14:01:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
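A sketch of the settings Scott mentions. The path and tablespace name are placeholders, and since this cluster is on 9.3 the parameters go into postgresql.conf followed by a reload (ALTER SYSTEM only exists from 9.4 onwards).

    -- One-time setup as superuser, pointing at a separate fast disk:
    CREATE TABLESPACE temp_fast LOCATION '/mnt/local_ssd/pg_temp';
    GRANT CREATE ON TABLESPACE temp_fast TO PUBLIC;

    # postgresql.conf
    log_temp_files   = 0            # log every temp file together with its size
    temp_tablespaces = 'temp_fast'  # sorts, hashes and temp tables go to the new disk
    # reload afterwards: pg_ctl reload, or SELECT pg_reload_conf();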
{
"msg_contents": "On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <[email protected]> wrote:\n> > This particular db is on 9.3.15. Recently we had a serious performance\n> > degradation related to a batch job that creates 4-5 temp tables and 5\n> > indexes. It is a really badly written job but what really confuses us is\n> > that this job has been running for years with no issue remotely\n> approaching\n> > this one. We are also using pgpool.\n> >\n> > The job would kick off with 20-30 of similar queries running at once.\n> The\n> > thing normally takes only 30ms or so to run - it only operates on 1\n> customer\n> > at a time (yes, it's horribly written). All of a sudden the cluster\n> started\n> > thrashing and performance seriously degraded. We tried a number of\n> things\n> > with no success:\n> >\n> > Analyzed the whole database\n> > Turned off full logging\n> > Turned off synchronous commit\n> > Vacuumed several of the catalog tables\n> > Checked if we had an abnormal high amount of traffic this time - we\n> didn't\n> > No abnormal disk/network issues (we would have seen much larger issues if\n> > that had been the case)\n> > Tried turning down the number of app nodes running\n> >\n> > What ended up completely resolving the issue was converting the query to\n> use\n> > ctes instead of temp tables. That means we avoided the disk writing and\n> the\n> > catalog churn, and useless indexes. However, we are baffled as to why\n> this\n> > could make such a big difference when we had no issue like this before,\n> and\n> > we have seen no systematic performance degradation in our system.\n> >\n> > Any insights would be greatly appreciated, as we are concerned not\n> knowing\n> > the root cause.\n>\n> How are your disks setup? One big drive with everything on it?\n> Separate disks for pg_xlog and pg's data dir and the OS logging? IO\n> contention is one of the big killers of db performance.\n\n\nIt's one san volume ssd for the data and wal files. But logging and memory\nspilling and archived xlogs go to a local ssd disk.\n\n\n> Logging likely isn't your problem, but yeah you don't need to log\n> ERRYTHANG to see the problem either. Log long running queries temp\n> usage, buffer usage, query plans on slow queries, stuff like that.\n>\n> You've likely hit a \"tipping point\" in terms of data size. Either it's\n> cause the query planner to make a bad decision, or you're spilling to\n> disk a lot more than you used to.\n\nBe sure to log temporary stuff with log_temp_files = 0 in your\n> postgresql.conf and then look for temporary file in your logs. I bet\n> you've started spilling into the same place as your temp tables are\n> going, and by default that's your data directory. Adding another drive\n> and moving pgsql's temp table space to it might help.\n>\n\nWe would not have competition between disk spilling and temp tables because\nwhat I described above - they are going to two different places. Also, I\nneglected to mention that we turned on auto-explain during this crisis, and\nfound the query plan was good, it was just taking forever due to thrashing\njust seconds after we kicked off the batches. I did NOT turn on\nlog_analyze and timing but it was enough to see there was no apparent query\nplan regression. 
Also, we had no change in the performance/plan after\nre-analyzing all tables.\n\n\n> Also increasing work_mem (but don't go crazy, it's per sort, so can\n> multiply fast on a busy server)\n>\n\nWe are already up at 400MB, and this query was using memory in the low KB\nlevels because it is very small (1 - 20 rows of data per temp table, and no\nexpensive selects with missing indexes or anything).\n\n\n> Also log your query plans or run explain / explain analyze on the slow\n> queries to see what they're doing that's so expensive.\n>\n\nYes, we did do that and there was nothing remarkable about the plan when we\nran them in production. All we saw was that over time, the actual\nexecution time (along with everything else on the entire system) started\nslowing down more and more as thrashing increased. But we found no\nevidence of a plan regression.\n\nThank you! Any more feedback is much appreciated.\n\nOn Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe <[email protected]> wrote:On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <[email protected]> wrote:\n> This particular db is on 9.3.15. Recently we had a serious performance\n> degradation related to a batch job that creates 4-5 temp tables and 5\n> indexes. It is a really badly written job but what really confuses us is\n> that this job has been running for years with no issue remotely approaching\n> this one. We are also using pgpool.\n>\n> The job would kick off with 20-30 of similar queries running at once. The\n> thing normally takes only 30ms or so to run - it only operates on 1 customer\n> at a time (yes, it's horribly written). All of a sudden the cluster started\n> thrashing and performance seriously degraded. We tried a number of things\n> with no success:\n>\n> Analyzed the whole database\n> Turned off full logging\n> Turned off synchronous commit\n> Vacuumed several of the catalog tables\n> Checked if we had an abnormal high amount of traffic this time - we didn't\n> No abnormal disk/network issues (we would have seen much larger issues if\n> that had been the case)\n> Tried turning down the number of app nodes running\n>\n> What ended up completely resolving the issue was converting the query to use\n> ctes instead of temp tables. That means we avoided the disk writing and the\n> catalog churn, and useless indexes. However, we are baffled as to why this\n> could make such a big difference when we had no issue like this before, and\n> we have seen no systematic performance degradation in our system.\n>\n> Any insights would be greatly appreciated, as we are concerned not knowing\n> the root cause.\n\nHow are your disks setup? One big drive with everything on it?\nSeparate disks for pg_xlog and pg's data dir and the OS logging? IO\ncontention is one of the big killers of db performance.It's one san volume ssd for the data and wal files. But logging and memory spilling and archived xlogs go to a local ssd disk. Logging likely isn't your problem, but yeah you don't need to log\nERRYTHANG to see the problem either. Log long running queries temp\nusage, buffer usage, query plans on slow queries, stuff like that.\n\nYou've likely hit a \"tipping point\" in terms of data size. Either it's\ncause the query planner to make a bad decision, or you're spilling to\ndisk a lot more than you used to. \nBe sure to log temporary stuff with log_temp_files = 0 in your\npostgresql.conf and then look for temporary file in your logs. 
I bet\nyou've started spilling into the same place as your temp tables are\ngoing, and by default that's your data directory. Adding another drive\nand moving pgsql's temp table space to it might help.We would not have competition between disk spilling and temp tables because what I described above - they are going to two different places. Also, I neglected to mention that we turned on auto-explain during this crisis, and found the query plan was good, it was just taking forever due to thrashing just seconds after we kicked off the batches. I did NOT turn on log_analyze and timing but it was enough to see there was no apparent query plan regression. Also, we had no change in the performance/plan after re-analyzing all tables. \nAlso increasing work_mem (but don't go crazy, it's per sort, so can\nmultiply fast on a busy server)We are already up at 400MB, and this query was using memory in the low KB levels because it is very small (1 - 20 rows of data per temp table, and no expensive selects with missing indexes or anything). \nAlso log your query plans or run explain / explain analyze on the slow\nqueries to see what they're doing that's so expensive.\nYes, we did do that and there was nothing remarkable about the plan when we ran them in production. All we saw was that over time, the actual execution time (along with everything else on the entire system) started slowing down more and more as thrashing increased. But we found no evidence of a plan regression.Thank you! Any more feedback is much appreciated.",
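For reference, a sketch of the auto_explain setup being described, assuming a superuser session; the threshold is illustrative, and log_analyze / log_timing are shown off to match what was done here:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '100ms'; -- plans of statements slower than this get logged
SET auto_explain.log_analyze = off;          -- left off here, so only estimated plans were captured
SET auto_explain.log_timing = off;           -- per-node timing also left off to avoid its overhead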
"msg_date": "Mon, 14 Aug 2017 15:46:23 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "On Mon, Aug 14, 2017 at 2:46 PM, Jeremy Finzel <[email protected]> wrote:\n> On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <[email protected]> wrote:\n>> >\n>> > Any insights would be greatly appreciated, as we are concerned not\n>> > knowing\n>> > the root cause.\n>>\n>> How are your disks setup? One big drive with everything on it?\n>> Separate disks for pg_xlog and pg's data dir and the OS logging? IO\n>> contention is one of the big killers of db performance.\n>\n>\n> It's one san volume ssd for the data and wal files. But logging and memory\n> spilling and archived xlogs go to a local ssd disk.\n>\n>>\n>> Logging likely isn't your problem, but yeah you don't need to log\n>> ERRYTHANG to see the problem either. Log long running queries temp\n>> usage, buffer usage, query plans on slow queries, stuff like that.\n>>\n>> You've likely hit a \"tipping point\" in terms of data size. Either it's\n>> cause the query planner to make a bad decision, or you're spilling to\n>> disk a lot more than you used to.\n>>\n>> Be sure to log temporary stuff with log_temp_files = 0 in your\n>> postgresql.conf and then look for temporary file in your logs. I bet\n>> you've started spilling into the same place as your temp tables are\n>> going, and by default that's your data directory. Adding another drive\n>> and moving pgsql's temp table space to it might help.\n>\n>\n> We would not have competition between disk spilling and temp tables because\n> what I described above - they are going to two different places. Also, I\n> neglected to mention that we turned on auto-explain during this crisis, and\n> found the query plan was good, it was just taking forever due to thrashing\n> just seconds after we kicked off the batches. I did NOT turn on log_analyze\n> and timing but it was enough to see there was no apparent query plan\n> regression. Also, we had no change in the performance/plan after\n> re-analyzing all tables.\n\nYou do know that temp tables go into the default temp table space,\njust like sorts, right?\n\nHave you used something like iostat to see which volume is getting all the IO?\n\n>\n>>\n>> Also increasing work_mem (but don't go crazy, it's per sort, so can\n>> multiply fast on a busy server)\n>\n>\n> We are already up at 400MB, and this query was using memory in the low KB\n> levels because it is very small (1 - 20 rows of data per temp table, and no\n> expensive selects with missing indexes or anything).\n\nAhh so it doesn't sound like it's spilling to disk then. Do the logs\nsay yes or no on that?\n\nBasically use unix tools to look for where you're thrashing. iotop can\nbe handy too.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 14:58:06 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "Also if you're using newly loaded data the db could be setting hint\nbits on the first select etc.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 15:02:22 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n\n> On Mon, Aug 14, 2017 at 2:46 PM, Jeremy Finzel <[email protected]> wrote:\n>\n>> On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe <[email protected]>\n>> wrote:\n>>>\n>>> On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel <[email protected]> wrote:\n>>> >\n>>> > Any insights would be greatly appreciated, as we are concerned not\n>>> > knowing\n>>> > the root cause.\n>>>\n>>> How are your disks setup? One big drive with everything on it?\n>>> Separate disks for pg_xlog and pg's data dir and the OS logging? IO\n>>> contention is one of the big killers of db performance.\n>>\n>>\n>> It's one san volume ssd for the data and wal files. But logging and memory\n>> spilling and archived xlogs go to a local ssd disk.\n>>\n>>>\n>>> Logging likely isn't your problem, but yeah you don't need to log\n>>> ERRYTHANG to see the problem either. Log long running queries temp\n>>> usage, buffer usage, query plans on slow queries, stuff like that.\n>>>\n>>> You've likely hit a \"tipping point\" in terms of data size. Either it's\n>>> cause the query planner to make a bad decision, or you're spilling to\n>>> disk a lot more than you used to.\n>>>\n>>> Be sure to log temporary stuff with log_temp_files = 0 in your\n>>> postgresql.conf and then look for temporary file in your logs. I bet\n>>> you've started spilling into the same place as your temp tables are\n>>> going, and by default that's your data directory. Adding another drive\n>>> and moving pgsql's temp table space to it might help.\n>>\n>>\n>> We would not have competition between disk spilling and temp tables because\n>> what I described above - they are going to two different places. Also, I\n>> neglected to mention that we turned on auto-explain during this crisis, and\n>> found the query plan was good, it was just taking forever due to thrashing\n>> just seconds after we kicked off the batches. I did NOT turn on log_analyze\n>> and timing but it was enough to see there was no apparent query plan\n>> regression. Also, we had no change in the performance/plan after\n>> re-analyzing all tables.\n>\n> You do know that temp tables go into the default temp table space,\n> just like sorts, right?\n\nNot so.\n\nThis system has no defined temp_tablespace however spillage due to\nsorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we\nhave symlinked out to a local SSD drive.\n\nWe do run a few of our other systems with temp_tablespace defined and\nfor these the heap/index files do share same volume as other temp usage.\n\nThx\n\n\n\n\n>\n> Have you used something like iostat to see which volume is getting all the IO?\n>\n>>\n>>>\n>>> Also increasing work_mem (but don't go crazy, it's per sort, so can\n>>> multiply fast on a busy server)\n>>\n>>\n>> We are already up at 400MB, and this query was using memory in the low KB\n>> levels because it is very small (1 - 20 rows of data per temp table, and no\n>> expensive selects with missing indexes or anything).\n>\n> Ahh so it doesn't sound like it's spilling to disk then. Do the logs\n> say yes or no on that?\n>\n> Basically use unix tools to look for where you're thrashing. iotop can\n> be handy too.\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 17:16:36 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp object churn"
},
{
"msg_contents": "On Mon, Aug 14, 2017 at 12:53 PM, Jeremy Finzel <[email protected]> wrote:\n> This particular db is on 9.3.15. Recently we had a serious performance\n> degradation related to a batch job that creates 4-5 temp tables and 5\n> indexes. It is a really badly written job but what really confuses us is\n> that this job has been running for years with no issue remotely approaching\n> this one. We are also using pgpool.\n\nDid you happen to notice that this occurred when you upgrading point\nrelease? If so, what version did you move from/to?\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 15:43:40 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n\n> On Mon, Aug 14, 2017 at 12:53 PM, Jeremy Finzel <[email protected]> wrote:\n>\n>> This particular db is on 9.3.15. Recently we had a serious performance\n>> degradation related to a batch job that creates 4-5 temp tables and 5\n>> indexes. It is a really badly written job but what really confuses us is\n>> that this job has been running for years with no issue remotely approaching\n>> this one. We are also using pgpool.\n>\n> Did you happen to notice that this occurred when you upgrading point\n> release? If so, what version did you move from/to?\n\nThe system was last started back in November. Running 9.3.15.\n\nNot aware of any host system libs or whatever change recently but will investigate.\n\n>\n> -- \n> Peter Geoghegan\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Aug 2017 18:10:38 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp object churn"
},
{
"msg_contents": "On Mon, Aug 14, 2017 at 4:16 PM, Jerry Sievers <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>\n>> You do know that temp tables go into the default temp table space,\n>> just like sorts, right?\n>\n> Not so.\n>\n> This system has no defined temp_tablespace however spillage due to\n> sorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we\n> have symlinked out to a local SSD drive.\n\nWhich is also where temp tables are created.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Aug 2017 09:51:24 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": ">\n> > Not so.\n> >\n> > This system has no defined temp_tablespace however spillage due to\n> > sorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we\n> > have symlinked out to a local SSD drive.\n>\n> Which is also where temp tables are created.\n>\n\nThis isn't true, at least in our environment. Just as proof, I have\ncreated a couple of temp tables, and querying the relfilenodes, they only\nshow up under base/<dbid>/t4_<relfilenode>:\n\ntest=# CREATE TEMP TABLE foo(id int);\nCREATE TABLE\ntest=# INSERT INTO foo SELECT * FROM generate_series(1,100);\nINSERT 0 100\ntest=# CREATE TEMP TABLE bar();\nCREATE TABLE\ntest=# SELECT relfilenode FROM pg_class WHERE relname IN('foo','bar');\n relfilenode\n-------------\n 20941\n 20944\n(2 rows)\n\npostgres@foo:/san/<cluster>/pgdata/base$ ls -l\ntotal 44\ndrwx------ 2 postgres postgres 4096 Jul 7 15:19 1\ndrwx------ 2 postgres postgres 4096 Nov 29 2016 12408\ndrwx------ 2 postgres postgres 4096 Jul 14 14:00 12409\ndrwx------ 2 postgres postgres 12288 Jul 7 15:19 18289\ndrwx------ 2 postgres postgres 12288 Jul 7 15:19 18803\ndrwx------ 2 postgres postgres 4096 Jul 7 15:19 20613\ndrwx------ 2 postgres postgres 4096 Aug 15 08:06 20886\nlrwxrwxrwx 1 postgres postgres 30 Jul 7 15:15 pgsql_tmp ->\n/local/pgsql_tmp/9.6/<cluster>\n\npostgres@pgsnap05:/san/<cluster>/pgdata/base$ ls -l 20886 | grep\n'20941\\|20944'\n-rw------- 1 postgres postgres 8192 Aug 15 10:55 t4_20941\n-rw------- 1 postgres postgres 0 Aug 15 10:55 t4_20944\npostgres@pgsnap05:/san/dba_dev_d/pgdata/base$ cd pgsql_tmp\npostgres@pgsnap05:/san/dba_dev_d/pgdata/base/pgsql_tmp$ ls -l\ntotal 0\n\n> Not so.\n>\n> This system has no defined temp_tablespace however spillage due to\n> sorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we\n> have symlinked out to a local SSD drive.\n\nWhich is also where temp tables are created.\nThis isn't true, at least in our environment. Just as proof, I have created a couple of temp tables, and querying the relfilenodes, they only show up under base/<dbid>/t4_<relfilenode>:test=# CREATE TEMP TABLE foo(id int);CREATE TABLEtest=# INSERT INTO foo SELECT * FROM generate_series(1,100);INSERT 0 100test=# CREATE TEMP TABLE bar();CREATE TABLEtest=# SELECT relfilenode FROM pg_class WHERE relname IN('foo','bar'); relfilenode------------- 20941 20944(2 rows)postgres@foo:/san/<cluster>/pgdata/base$ ls -ltotal 44drwx------ 2 postgres postgres 4096 Jul 7 15:19 1drwx------ 2 postgres postgres 4096 Nov 29 2016 12408drwx------ 2 postgres postgres 4096 Jul 14 14:00 12409drwx------ 2 postgres postgres 12288 Jul 7 15:19 18289drwx------ 2 postgres postgres 12288 Jul 7 15:19 18803drwx------ 2 postgres postgres 4096 Jul 7 15:19 20613drwx------ 2 postgres postgres 4096 Aug 15 08:06 20886lrwxrwxrwx 1 postgres postgres 30 Jul 7 15:15 pgsql_tmp -> /local/pgsql_tmp/9.6/<cluster>postgres@pgsnap05:/san/<cluster>/pgdata/base$ ls -l 20886 | grep '20941\\|20944'-rw------- 1 postgres postgres 8192 Aug 15 10:55 t4_20941-rw------- 1 postgres postgres 0 Aug 15 10:55 t4_20944postgres@pgsnap05:/san/dba_dev_d/pgdata/base$ cd pgsql_tmppostgres@pgsnap05:/san/dba_dev_d/pgdata/base/pgsql_tmp$ ls -ltotal 0",
"msg_date": "Tue, 15 Aug 2017 11:00:44 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "Oh yeah, sorry. Was looking at a different system where we were using\na tablespace for temp tables.\n\nOn Tue, Aug 15, 2017 at 10:00 AM, Jeremy Finzel <[email protected]> wrote:\n>> > Not so.\n>> >\n>> > This system has no defined temp_tablespace however spillage due to\n>> > sorting/hashing that exceeds work_mem goes to base/pgsql_tmp which we\n>> > have symlinked out to a local SSD drive.\n>>\n>> Which is also where temp tables are created.\n>\n>\n> This isn't true, at least in our environment. Just as proof, I have created\n> a couple of temp tables, and querying the relfilenodes, they only show up\n> under base/<dbid>/t4_<relfilenode>:\n>\n> test=# CREATE TEMP TABLE foo(id int);\n> CREATE TABLE\n> test=# INSERT INTO foo SELECT * FROM generate_series(1,100);\n> INSERT 0 100\n> test=# CREATE TEMP TABLE bar();\n> CREATE TABLE\n> test=# SELECT relfilenode FROM pg_class WHERE relname IN('foo','bar');\n> relfilenode\n> -------------\n> 20941\n> 20944\n> (2 rows)\n>\n> postgres@foo:/san/<cluster>/pgdata/base$ ls -l\n> total 44\n> drwx------ 2 postgres postgres 4096 Jul 7 15:19 1\n> drwx------ 2 postgres postgres 4096 Nov 29 2016 12408\n> drwx------ 2 postgres postgres 4096 Jul 14 14:00 12409\n> drwx------ 2 postgres postgres 12288 Jul 7 15:19 18289\n> drwx------ 2 postgres postgres 12288 Jul 7 15:19 18803\n> drwx------ 2 postgres postgres 4096 Jul 7 15:19 20613\n> drwx------ 2 postgres postgres 4096 Aug 15 08:06 20886\n> lrwxrwxrwx 1 postgres postgres 30 Jul 7 15:15 pgsql_tmp ->\n> /local/pgsql_tmp/9.6/<cluster>\n>\n> postgres@pgsnap05:/san/<cluster>/pgdata/base$ ls -l 20886 | grep\n> '20941\\|20944'\n> -rw------- 1 postgres postgres 8192 Aug 15 10:55 t4_20941\n> -rw------- 1 postgres postgres 0 Aug 15 10:55 t4_20944\n> postgres@pgsnap05:/san/dba_dev_d/pgdata/base$ cd pgsql_tmp\n> postgres@pgsnap05:/san/dba_dev_d/pgdata/base/pgsql_tmp$ ls -l\n> total 0\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Aug 2017 11:04:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "On Mon, Aug 14, 2017 at 5:10 PM, Jerry Sievers <[email protected]> wrote:\n> Peter Geoghegan <[email protected]> writes:\n>\n>> On Mon, Aug 14, 2017 at 12:53 PM, Jeremy Finzel <[email protected]> wrote:\n>>\n>>> This particular db is on 9.3.15. Recently we had a serious performance\n>>> degradation related to a batch job that creates 4-5 temp tables and 5\n>>> indexes. It is a really badly written job but what really confuses us is\n>>> that this job has been running for years with no issue remotely approaching\n>>> this one. We are also using pgpool.\n>>\n>> Did you happen to notice that this occurred when you upgrading point\n>> release? If so, what version did you move from/to?\n>\n> The system was last started back in November. Running 9.3.15.\n>\n> Not aware of any host system libs or whatever change recently but will investigate.\n\nSo do iostat or iotop show you if / where your disks are working\nhardest? Or is this CPU overhead that's killing performance?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Aug 2017 11:07:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe <[email protected]>\nwrote:\n\n> So do iostat or iotop show you if / where your disks are working\n> hardest? Or is this CPU overhead that's killing performance?\n>\n\nSorry for the delayed reply. I took a look in more detail at the query\nplans from our problem query during this incident. There are actually 6\nplans, because there were 6 unique queries. I traced one query through our\nlogs, and found something really interesting. That is that all of the\nfirst 5 queries are creating temp tables, and all of them took upwards of\n500ms each to run. The final query, however, is a simple select from the\nlast temp table, and that query took 0.035ms! This really confirms that\nsomehow, the issue had to do with *writing *to the SAN, I think. Of course\nthis doesn't answer a whole lot, because we had no other apparent issues\nwith write performance at all.\n\nI also provide some graphs below.\n\n7pm-3am on 8/10 (first incidents were around 10:30pm, other incidents ~1am,\n2am):\n\nLocal Disk IO:\n\n[image: Screen Shot 2017-08-18 at 8.20.06 AM.png]\n\nSAN IO:\n\n[image: Screen Shot 2017-08-18 at 8.16.59 AM.png]\n\nCPU:\n\n[image: Screen Shot 2017-08-18 at 8.20.58 AM.png]\n\n7-9pm on 8/10 (controlled attempts starting a little after 7):\n\nCPU:\n\n[image: Screen Shot 2017-08-18 at 8.43.35 AM.png]\n\nWrite IO on SAN:\n\n[image: Screen Shot 2017-08-18 at 8.44.32 AM.png]\n\nRead IO on Local disk:\n\n[image: Screen Shot 2017-08-18 at 8.46.27 AM.png]\n\nWrite IO on Local disk:\n\n[image: Screen Shot 2017-08-18 at 8.46.58 AM.png]\n\nOn Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe <[email protected]> wrote:So do iostat or iotop show you if / where your disks are working\nhardest? Or is this CPU overhead that's killing performance?\nSorry for the delayed reply. I took a look in more detail at the query plans from our problem query during this incident. There are actually 6 plans, because there were 6 unique queries. I traced one query through our logs, and found something really interesting. That is that all of the first 5 queries are creating temp tables, and all of them took upwards of 500ms each to run. The final query, however, is a simple select from the last temp table, and that query took 0.035ms! This really confirms that somehow, the issue had to do with writing to the SAN, I think. Of course this doesn't answer a whole lot, because we had no other apparent issues with write performance at all.I also provide some graphs below.7pm-3am on 8/10 (first incidents were around 10:30pm, other incidents ~1am, 2am):Local Disk IO:SAN IO:CPU:7-9pm on 8/10 (controlled attempts starting a little after 7):CPU:Write IO on SAN:Read IO on Local disk:Write IO on Local disk:",
"msg_date": "Fri, 18 Aug 2017 09:21:36 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "\n\nOn 19/08/17 02:21, Jeremy Finzel wrote:\n> On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> So do iostat or iotop show you if / where your disks are working\n> hardest? Or is this CPU overhead that's killing performance?\n>\n>\n> Sorry for the delayed reply. I took a look in more detail at the query \n> plans from our problem query during this incident. There are actually \n> 6 plans, because there were 6 unique queries. I traced one query \n> through our logs, and found something really interesting. That is that \n> all of the first 5 queries are creating temp tables, and all of them \n> took upwards of 500ms each to run. The final query, however, is a \n> simple select from the last temp table, and that query took 0.035ms! \n> This really confirms that somehow, the issue had to do with /writing \n> /to the SAN, I think. Of course this doesn't answer a whole lot, \n> because we had no other apparent issues with write performance at all.\n>\n> I also provide some graphs below.\n>\n>\nHi, graphs for latency (or await etc) might be worth looking at too - \nsometimes the troughs between the IO spikes are actually when the disks \nhave been overwhelmed with queued up pending IOs...\n\nAlso SANs are notorious for this sort of thing - typically they have a \nbig RAM cache that you are actually writing to, and everything is nice \nand fast until your workload (along with everyone else's) fills up the \ncache and then performance drops of a cliff for a while (I've seen SAN \ndisks with iostat utilizations of 105% <-- Lol... and await numbers that \nscroll off the page in that scenario)!\n\nregards\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 19 Aug 2017 13:49:49 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
},
{
"msg_contents": "On 19/08/17 13:49, Mark Kirkwood wrote:\n\n>\n>\n> On 19/08/17 02:21, Jeremy Finzel wrote:\n>> On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe \n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> So do iostat or iotop show you if / where your disks are working\n>> hardest? Or is this CPU overhead that's killing performance?\n>>\n>>\n>> Sorry for the delayed reply. I took a look in more detail at the \n>> query plans from our problem query during this incident. There are \n>> actually 6 plans, because there were 6 unique queries. I traced one \n>> query through our logs, and found something really interesting. That \n>> is that all of the first 5 queries are creating temp tables, and all \n>> of them took upwards of 500ms each to run. The final query, however, \n>> is a simple select from the last temp table, and that query took \n>> 0.035ms! This really confirms that somehow, the issue had to do with \n>> /writing /to the SAN, I think. Of course this doesn't answer a whole \n>> lot, because we had no other apparent issues with write performance \n>> at all.\n>>\n>> I also provide some graphs below.\n>>\n>>\n> Hi, graphs for latency (or await etc) might be worth looking at too - \n> sometimes the troughs between the IO spikes are actually when the \n> disks have been overwhelmed with queued up pending IOs...\n>\n>\n\nSorry - I see you *did* actually have iowait in there under your CPU \ngraph...which doesn't look to be showing up a lot of waiting. However \nstill might be well worth getting graphs showing per device waits and \nutilizations.\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Aug 2017 13:04:13 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sudden performance degradation related to temp\n object churn"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have come across a unexpected behavior.\nYou can see full detail on an issue on the QGEP project in Github :\nhttps://github.com/QGEP/QGEP/issues/308#issuecomment-323122514\n\nBasically, we have this view with some LEFT JOIN :\nhttp://paste.debian.net/982003/\n\nWe have indexes on some fields ( foreign keys, and a GIST index for the\nPostGIS geometry field)\nIf I use the raw SQL defining the view, and add a WHERE clause like:\n\nWHERE \"progression_geometry\" &&\nst_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)\n\nthe query plan is \"as expected\", as it is using the spatial index (and\nothers too). This query gets 100 lines from a \"main\" table containing 20000\nlines (and child tables having more). It is pretty fast and \"low cost\"\nSee the query plan:\nhttps://explain.depesz.com/s/6Qgb\n\nWhen we call the WHERE on the view:\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\nSELECT *\nFROM \"qgep\".\"vw_qgep_reach\"\nWHERE \"progression_geometry\" &&\nst_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)\n\n\nThe query plan is \"wrong\", as PostgreSQL seems to consider it should do a\nseq scan on the tables, and only afterwards filter with the WHERE:\nhttps://explain.depesz.com/s/wXV\n\nThe query takes about 1 second instead of less than 100ms.\n\nDo you have any hint on this kind of issue ?\n\nThanks in advance\n\nRegards,\n\nMichaël\n\nHi all,I have come across a unexpected behavior.You can see full detail on an issue on the QGEP project in Github : https://github.com/QGEP/QGEP/issues/308#issuecomment-323122514Basically, we have this view with some LEFT JOIN : http://paste.debian.net/982003/We have indexes on some fields ( foreign keys, and a GIST index for the PostGIS geometry field)If I use the raw SQL defining the view, and add a WHERE clause like:WHERE \"progression_geometry\" && st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)the query plan is \"as expected\", as it is using the spatial index (and others too). This query gets 100 lines from a \"main\" table containing 20000 lines (and child tables having more). It is pretty fast and \"low cost\"See the query plan:https://explain.depesz.com/s/6QgbWhen we call the WHERE on the view:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)SELECT *FROM \"qgep\".\"vw_qgep_reach\" WHERE \"progression_geometry\" && st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)The query plan is \"wrong\", as PostgreSQL seems to consider it should do a seq scan on the tables, and only afterwards filter with the WHERE:https://explain.depesz.com/s/wXVThe query takes about 1 second instead of less than 100ms.Do you have any hint on this kind of issue ?Thanks in advanceRegards,Michaël",
"msg_date": "Fri, 18 Aug 2017 18:46:47 +0200",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan for views and WHERE clauses, Luke is not using the index"
},
{
"msg_contents": "Hi all\n\nI also tried to change the values of join_collapse_limit and\nrom_collapse_limit to higher values than default: 12, 50 or even 100, with\nno improvement on the query plan.\n\nIs this a typical behavior, or is there something particular in my query\nthat causes this big difference between the raw query and the view with\nWHERE ?\n\nRegards\nMichaël\n\n2017-08-18 18:46 GMT+02:00 kimaidou <[email protected]>:\n\n> Hi all,\n>\n> I have come across a unexpected behavior.\n> You can see full detail on an issue on the QGEP project in Github :\n> https://github.com/QGEP/QGEP/issues/308#issuecomment-323122514\n>\n> Basically, we have this view with some LEFT JOIN :\n> http://paste.debian.net/982003/\n>\n> We have indexes on some fields ( foreign keys, and a GIST index for the\n> PostGIS geometry field)\n> If I use the raw SQL defining the view, and add a WHERE clause like:\n>\n> WHERE \"progression_geometry\" && st_makeenvelope(1728327.\n> 03249295568093657,8240789.26074041239917278,1728608.\n> 10987572139129043,8240958.16933418624103069,3949)\n>\n> the query plan is \"as expected\", as it is using the spatial index (and\n> others too). This query gets 100 lines from a \"main\" table containing 20000\n> lines (and child tables having more). It is pretty fast and \"low cost\"\n> See the query plan:\n> https://explain.depesz.com/s/6Qgb\n>\n> When we call the WHERE on the view:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT *\n> FROM \"qgep\".\"vw_qgep_reach\"\n> WHERE \"progression_geometry\" && st_makeenvelope(1728327.\n> 03249295568093657,8240789.26074041239917278,1728608.\n> 10987572139129043,8240958.16933418624103069,3949)\n>\n>\n> The query plan is \"wrong\", as PostgreSQL seems to consider it should do a\n> seq scan on the tables, and only afterwards filter with the WHERE:\n> https://explain.depesz.com/s/wXV\n>\n> The query takes about 1 second instead of less than 100ms.\n>\n> Do you have any hint on this kind of issue ?\n>\n> Thanks in advance\n>\n> Regards,\n>\n> Michaël\n>\n>\n\nHi allI also tried to change the values of join_collapse_limit and rom_collapse_limit to higher values than default: 12, 50 or even 100, with no improvement on the query plan.Is this a typical behavior, or is there something particular in my query that causes this big difference between the raw query and the view with WHERE ?RegardsMichaël2017-08-18 18:46 GMT+02:00 kimaidou <[email protected]>:Hi all,I have come across a unexpected behavior.You can see full detail on an issue on the QGEP project in Github : https://github.com/QGEP/QGEP/issues/308#issuecomment-323122514Basically, we have this view with some LEFT JOIN : http://paste.debian.net/982003/We have indexes on some fields ( foreign keys, and a GIST index for the PostGIS geometry field)If I use the raw SQL defining the view, and add a WHERE clause like:WHERE \"progression_geometry\" && st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)the query plan is \"as expected\", as it is using the spatial index (and others too). This query gets 100 lines from a \"main\" table containing 20000 lines (and child tables having more). 
It is pretty fast and \"low cost\"See the query plan:https://explain.depesz.com/s/6QgbWhen we call the WHERE on the view:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)SELECT *FROM \"qgep\".\"vw_qgep_reach\" WHERE \"progression_geometry\" && st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)The query plan is \"wrong\", as PostgreSQL seems to consider it should do a seq scan on the tables, and only afterwards filter with the WHERE:https://explain.depesz.com/s/wXVThe query takes about 1 second instead of less than 100ms.Do you have any hint on this kind of issue ?Thanks in advanceRegards,Michaël",
"msg_date": "Mon, 21 Aug 2017 22:34:27 +0200",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for views and WHERE clauses,\n Luke is not using the index"
},
{
"msg_contents": "On 19 August 2017 at 04:46, kimaidou <[email protected]> wrote:\n> When we call the WHERE on the view:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT *\n> FROM \"qgep\".\"vw_qgep_reach\"\n> WHERE \"progression_geometry\" &&\n> st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)\n>\n>\n> The query plan is \"wrong\", as PostgreSQL seems to consider it should do a\n> seq scan on the tables, and only afterwards filter with the WHERE:\n> https://explain.depesz.com/s/wXV\n>\n> The query takes about 1 second instead of less than 100ms.\n>\n> Do you have any hint on this kind of issue ?\n\nThis is by design due to the DISTINCT ON() clause. Only quals which\nfilter columns which are in the DISTINCT ON can be safely pushed down.\n\nConsider the following, where I've manually pushed the WHERE clause.\n\npostgres=# create table tt (a int, b int);\nCREATE TABLE\npostgres=# create index on tt (a);\nCREATE INDEX\npostgres=# insert into tt values(1,1),(1,2),(2,1),(2,2);\nINSERT 0 4\npostgres=# select * from (select distinct on (a) a,b from tt order by\na,b) tt where b = 2;\n a | b\n---+---\n(0 rows)\n\n\npostgres=# select * from (select distinct on (a) a,b from tt where b =\n2 order by a,b) tt;\n a | b\n---+---\n 1 | 2\n 2 | 2\n(2 rows)\n\nNote the results are not the same.\n\nIf I'd done WHERE a = 2, then the planner would have pushed the qual\ndown into the subquery.\n\nMore reading in check_output_expressions() in allpaths.c:\n\n/* If subquery uses DISTINCT ON, check point 3 */\nif (subquery->hasDistinctOn &&\n!targetIsInSortList(tle, InvalidOid, subquery->distinctClause))\n{\n/* non-DISTINCT column, so mark it unsafe */\nsafetyInfo->unsafeColumns[tle->resno] = true;\ncontinue;\n}\n\nThe comment for point 3 reads:\n\n * 3. If the subquery uses DISTINCT ON, we must not push down any quals that\n * refer to non-DISTINCT output columns, because that could change the set\n * of rows returned. (This condition is vacuous for DISTINCT, because then\n * there are no non-DISTINCT output columns, so we needn't check. Note that\n * subquery_is_pushdown_safe already reported that we can't use volatile\n * quals if there's DISTINCT or DISTINCT ON.)\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Aug 2017 09:52:17 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for views and WHERE clauses, Luke is not\n using the index"
},
{
"msg_contents": "Thanks a lot for your detailed explanation. I will try ASAP with no\nDISTINCT ( we are quite sure it is not needed anyway ), and report back\nhere.\n\nMichaël\n\n2017-08-21 23:52 GMT+02:00 David Rowley <[email protected]>:\n\n> On 19 August 2017 at 04:46, kimaidou <[email protected]> wrote:\n> > When we call the WHERE on the view:\n> >\n> > EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> > SELECT *\n> > FROM \"qgep\".\"vw_qgep_reach\"\n> > WHERE \"progression_geometry\" &&\n> > st_makeenvelope(1728327.03249295568093657,8240789.\n> 26074041239917278,1728608.10987572139129043,8240958.\n> 16933418624103069,3949)\n> >\n> >\n> > The query plan is \"wrong\", as PostgreSQL seems to consider it should do a\n> > seq scan on the tables, and only afterwards filter with the WHERE:\n> > https://explain.depesz.com/s/wXV\n> >\n> > The query takes about 1 second instead of less than 100ms.\n> >\n> > Do you have any hint on this kind of issue ?\n>\n> This is by design due to the DISTINCT ON() clause. Only quals which\n> filter columns which are in the DISTINCT ON can be safely pushed down.\n>\n> Consider the following, where I've manually pushed the WHERE clause.\n>\n> postgres=# create table tt (a int, b int);\n> CREATE TABLE\n> postgres=# create index on tt (a);\n> CREATE INDEX\n> postgres=# insert into tt values(1,1),(1,2),(2,1),(2,2);\n> INSERT 0 4\n> postgres=# select * from (select distinct on (a) a,b from tt order by\n> a,b) tt where b = 2;\n> a | b\n> ---+---\n> (0 rows)\n>\n>\n> postgres=# select * from (select distinct on (a) a,b from tt where b =\n> 2 order by a,b) tt;\n> a | b\n> ---+---\n> 1 | 2\n> 2 | 2\n> (2 rows)\n>\n> Note the results are not the same.\n>\n> If I'd done WHERE a = 2, then the planner would have pushed the qual\n> down into the subquery.\n>\n> More reading in check_output_expressions() in allpaths.c:\n>\n> /* If subquery uses DISTINCT ON, check point 3 */\n> if (subquery->hasDistinctOn &&\n> !targetIsInSortList(tle, InvalidOid, subquery->distinctClause))\n> {\n> /* non-DISTINCT column, so mark it unsafe */\n> safetyInfo->unsafeColumns[tle->resno] = true;\n> continue;\n> }\n>\n> The comment for point 3 reads:\n>\n> * 3. If the subquery uses DISTINCT ON, we must not push down any quals\n> that\n> * refer to non-DISTINCT output columns, because that could change the set\n> * of rows returned. (This condition is vacuous for DISTINCT, because then\n> * there are no non-DISTINCT output columns, so we needn't check. Note\n> that\n> * subquery_is_pushdown_safe already reported that we can't use volatile\n> * quals if there's DISTINCT or DISTINCT ON.)\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nThanks a lot for your detailed explanation. 
I will try ASAP with no DISTINCT ( we are quite sure it is not needed anyway ), and report back here.Michaël2017-08-21 23:52 GMT+02:00 David Rowley <[email protected]>:On 19 August 2017 at 04:46, kimaidou <[email protected]> wrote:\n> When we call the WHERE on the view:\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)\n> SELECT *\n> FROM \"qgep\".\"vw_qgep_reach\"\n> WHERE \"progression_geometry\" &&\n> st_makeenvelope(1728327.03249295568093657,8240789.26074041239917278,1728608.10987572139129043,8240958.16933418624103069,3949)\n>\n>\n> The query plan is \"wrong\", as PostgreSQL seems to consider it should do a\n> seq scan on the tables, and only afterwards filter with the WHERE:\n> https://explain.depesz.com/s/wXV\n>\n> The query takes about 1 second instead of less than 100ms.\n>\n> Do you have any hint on this kind of issue ?\n\nThis is by design due to the DISTINCT ON() clause. Only quals which\nfilter columns which are in the DISTINCT ON can be safely pushed down.\n\nConsider the following, where I've manually pushed the WHERE clause.\n\npostgres=# create table tt (a int, b int);\nCREATE TABLE\npostgres=# create index on tt (a);\nCREATE INDEX\npostgres=# insert into tt values(1,1),(1,2),(2,1),(2,2);\nINSERT 0 4\npostgres=# select * from (select distinct on (a) a,b from tt order by\na,b) tt where b = 2;\n a | b\n---+---\n(0 rows)\n\n\npostgres=# select * from (select distinct on (a) a,b from tt where b =\n2 order by a,b) tt;\n a | b\n---+---\n 1 | 2\n 2 | 2\n(2 rows)\n\nNote the results are not the same.\n\nIf I'd done WHERE a = 2, then the planner would have pushed the qual\ndown into the subquery.\n\nMore reading in check_output_expressions() in allpaths.c:\n\n/* If subquery uses DISTINCT ON, check point 3 */\nif (subquery->hasDistinctOn &&\n!targetIsInSortList(tle, InvalidOid, subquery->distinctClause))\n{\n/* non-DISTINCT column, so mark it unsafe */\nsafetyInfo->unsafeColumns[tle->resno] = true;\ncontinue;\n}\n\nThe comment for point 3 reads:\n\n * 3. If the subquery uses DISTINCT ON, we must not push down any quals that\n * refer to non-DISTINCT output columns, because that could change the set\n * of rows returned. (This condition is vacuous for DISTINCT, because then\n * there are no non-DISTINCT output columns, so we needn't check. Note that\n * subquery_is_pushdown_safe already reported that we can't use volatile\n * quals if there's DISTINCT or DISTINCT ON.)\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 22 Aug 2017 09:24:12 +0200",
"msg_from": "kimaidou <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan for views and WHERE clauses, Luke is not\n using the index"
}
] |
[
{
"msg_contents": "I am a Postgres Newbie and trying to learn :)We have a scenario wherein, one\nof the SQL with different input value for import_num showing different\nexecution plan.As an example, with import_num = '4520440' the execution plan\nshows Nested Loop and is taking ~12secs. With import_num = '4520460'\nexecution plan showed using \"Materialize\" and never completed. After I set\nenable_material to off, the execution plan is changed using Hash Semi Join\nand completes in less than 3 secs. SELECT count(*) FROM test_tab WHERE login\nIN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520440' AND\nlogin IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE\nimport_num = '0' AND login IS NOT NULL) AND import_num =\n'4520440';+--------+| count |+--------+| 746982 |+--------+(1 row)Time:\n12054.274 ms\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+| \nQUERY PLAN \n|+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\nAggregate (cost=351405.08..351405.09 rows=1 width=8) \n|| -> Nested Loop (cost=349846.23..350366.17 rows=415562 width=0) \n|| -> HashAggregate (cost=349845.67..349847.67 rows=200 width=96) \n|| Group Key: (\"ANY_subquery\".login)::text \n|| -> Subquery Scan on \"ANY_subquery\" \n(cost=340828.23..348557.47 rows=515282 width=96) \n|| -> SetOp Except (cost=340828.23..343404.65\nrows=515282 width=100) \n|| -> Sort (cost=340828.23..342116.44\nrows=515283 width=100) \n|| Sort Key: \"*SELECT* 1\".login \n|| -> Append (cost=0.56..275836.74\nrows=515283 width=100) \n|| -> Subquery Scan on \"*SELECT* 1\" \n(cost=0.56..275834.70 rows=515282 width=12) \n|| -> Unique \n(cost=0.56..270681.88 rows=515282 width=8) \n|| -> Index Only Scan\nusing ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..268604.07\nrows=831125 width=8) || \nIndex Cond: ((import_num = '4520440'::numeric) AND (login IS NOT NULL)) \n|| -> Subquery Scan on \"*SELECT* 2\" \n(cost=0.56..2.04 rows=1 width=12) \n|| -> Unique (cost=0.56..2.03\nrows=1 width=8) \n|| -> Index Only Scan\nusing ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1\nwidth=8) || \nIndex Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) \n|| -> Index Only Scan using ui_nkey_test_tab on test_tab \n(cost=0.56..2.58 rows=1 width=8) \n|| Index Cond: ((import_num = '4520440'::numeric) AND (login =\n(\"ANY_subquery\".login)::text)) \n|+-----------------------------------------------------------------------------------------------------------------------------------------------------------+(19\nrows)\nSELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN\n(SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login\nIS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num =\n'0' AND login IS NOT NULL);The SQL was never completing and had the below\nSQL execution plan --\n+-------------------------------------------------------------------------------------------------------------------------------------------+| \nQUERY PLAN \n|+-------------------------------------------------------------------------------------------------------------------------------------------+|\nAggregate (cost=6.14..6.15 rows=1 width=8) \n|| -> Nested Loop Semi Join (cost=1.12..6.13 rows=1 width=0) \n|| Join Filter: ((test_tab.login)::text =\n(\"ANY_subquery\".login)::text) \n|| -> 
Index Only Scan using ui_nkey_test_tab on test_tab \n(cost=0.56..2.02 rows=1 width=8) \n|| Index Cond: (import_num = '4520460'::numeric) \n|| -> Materialize (cost=0.56..4.10 rows=1 width=96) \n|| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09\nrows=1 width=96) || \n-> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) \n|| -> Append (cost=0.56..4.08 rows=2 width=100) \n|| -> Subquery Scan on \"*SELECT* 1\" \n(cost=0.56..2.04 rows=1 width=12) || \n-> Unique (cost=0.56..2.03 rows=1 width=8) \n|| -> Index Only Scan using\nui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) || \nIndex Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) \n|| -> Subquery Scan on \"*SELECT* 2\" \n(cost=0.56..2.04 rows=1 width=12) || \n-> Unique (cost=0.56..2.03 rows=1 width=8) \n|| -> Index Only Scan using\nui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) || \nIndex Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) \n|+-------------------------------------------------------------------------------------------------------------------------------------------+(17\nrows)\n############################################## After I set enable_material\nto off;#############################################SELECT count(*) FROM\ntest_tab WHERE import_num = '4520460' and login IN (SELECT DISTINCT login\nFROM test_tab WHERE import_num = '4520460' AND login IS NOT NULL EXCEPT\nSELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT\nNULL);+--------+| count |+--------+| 762599 |+--------+(1 row)Time:\n2116.889 ms\n+-------------------------------------------------------------------------------------------------------------------------------------------+| \nQUERY PLAN \n|+-------------------------------------------------------------------------------------------------------------------------------------------+|\nAggregate (cost=6.13..6.14 rows=1 width=8) \n|| -> Hash Semi Join (cost=4.67..6.13 rows=1 width=0) \n|| Hash Cond: ((test_tab.login)::text =\n(\"ANY_subquery\".login)::text) \n|| -> Index Only Scan using ui_nkey_test_tab on test_tab \n(cost=0.56..2.02 rows=1 width=8) \n|| Index Cond: (import_num = '4520460'::numeric) \n|| -> Hash (cost=4.09..4.09 rows=1 width=96) \n|| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09\nrows=1 width=96) || \n-> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) \n|| -> Append (cost=0.56..4.08 rows=2 width=100) \n|| -> Subquery Scan on \"*SELECT* 1\" \n(cost=0.56..2.04 rows=1 width=12) || \n-> Unique (cost=0.56..2.03 rows=1 width=8) \n|| -> Index Only Scan using\nui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) || \nIndex Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) \n|| -> Subquery Scan on \"*SELECT* 2\" \n(cost=0.56..2.04 rows=1 width=12) || \n-> Unique (cost=0.56..2.03 rows=1 width=8) \n|| -> Index Only Scan using\nui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) || \nIndex Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) \n|+-------------------------------------------------------------------------------------------------------------------------------------------+(17\nrows)\nLooking at the row count for import_numselect import_num, count(*) from\ntest_tab group by import_num order by 2;+------------+--------+| import_num\n| count |+------------+--------+| 4520440 | 746982 || 4520460 |\n762599 |+------------+--------+(37 rows)With different value of import_num\nwe are having different execution plan. 
Is there a way to force the same\nHash semi Join plan to sql with import_num 4520440, currently doing nested\nloop.I tried /*+HashJoin(a1 ANY_subquery)*/ but the sql execution plan\ndoesn't change.SELECT /*+HashJoin(a1 ANY_subquery)*/ count(*) FROM test_tab\na1 WHERE import_num = '4520440' and login IN (SELECT DISTINCT login FROM\ntest_tab a2 WHERE import_num = '4520440' AND login IS NOT NULL EXCEPT\nSELECT DISTINCT login FROM test_tab a3 WHERE import_num = '0' AND login IS\nNOT NULL);Regards,Anand\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Performance-Issue-Materialize-tp5979128.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nI am a Postgres Newbie and trying to learn :)\n\nWe have a scenario wherein, one of the SQL with different input value for import_num showing different execution plan.\n\nAs an example, with import_num = '4520440' the execution plan shows Nested Loop and is taking ~12secs. \nWith import_num = '4520460' execution plan showed using \"Materialize\" and never completed. After I set enable_material to off, the execution plan is changed using Hash Semi Join and completes in less than 3 secs. \n\nSELECT count(*) FROM test_tab WHERE login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520440' AND \nlogin IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL) \nAND import_num = '4520440';\n\n+--------+\n| count |\n+--------+\n| 746982 |\n+--------+\n(1 row)\n\nTime: 12054.274 ms\n\n\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=351405.08..351405.09 rows=1 width=8) |\n| -> Nested Loop (cost=349846.23..350366.17 rows=415562 width=0) |\n| -> HashAggregate (cost=349845.67..349847.67 rows=200 width=96) |\n| Group Key: (\"ANY_subquery\".login)::text |\n| -> Subquery Scan on \"ANY_subquery\" (cost=340828.23..348557.47 rows=515282 width=96) |\n| -> SetOp Except (cost=340828.23..343404.65 rows=515282 width=100) |\n| -> Sort (cost=340828.23..342116.44 rows=515283 width=100) |\n| Sort Key: \"*SELECT* 1\".login |\n| -> Append (cost=0.56..275836.74 rows=515283 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..275834.70 rows=515282 width=12) |\n| -> Unique (cost=0.56..270681.88 rows=515282 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..268604.07 rows=831125 width=8) |\n| Index Cond: ((import_num = '4520440'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.58 rows=1 width=8) |\n| Index Cond: ((import_num = '4520440'::numeric) AND (login = (\"ANY_subquery\".login)::text)) |\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n(19 rows)\n\n\n\nSELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN (SELECT DISTINCT login FROM test_tab WHERE 
import_num = '4520460' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL);\n\nThe SQL was never completing and had the below SQL execution plan --\n\n\n\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=6.14..6.15 rows=1 width=8) |\n| -> Nested Loop Semi Join (cost=1.12..6.13 rows=1 width=0) |\n| Join Filter: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n| Index Cond: (import_num = '4520460'::numeric) |\n| -> Materialize (cost=0.56..4.10 rows=1 width=96) |\n| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n| -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n| -> Append (cost=0.56..4.08 rows=2 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n(17 rows)\n\n\n#############################################\n# After I set enable_material to off;\n#############################################\n\nSELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL);\n+--------+\n| count |\n+--------+\n| 762599 |\n+--------+\n(1 row)\n\nTime: 2116.889 ms\n\n\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=6.13..6.14 rows=1 width=8) |\n| -> Hash Semi Join (cost=4.67..6.13 rows=1 width=0) |\n| Hash Cond: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n| Index Cond: (import_num = '4520460'::numeric) |\n| -> Hash (cost=4.09..4.09 rows=1 width=96) |\n| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n| -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n| -> Append (cost=0.56..4.08 rows=2 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| 
-> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n(17 rows)\n\n\n\nLooking at the row count for import_num\n\nselect import_num, count(*) from test_tab group by import_num order by 2;\n+------------+--------+\n| import_num | count |\n+------------+--------+\n| 4520440 | 746982 |\n| 4520460 | 762599 |\n+------------+--------+\n(37 rows)\n\n\nWith different value of import_num we are having different execution plan. Is there a way to force the same Hash semi Join plan to sql with import_num 4520440, currently doing nested loop.\n\nI tried /*+HashJoin(a1 ANY_subquery)*/ but the sql execution plan doesn't change.\n\nSELECT /*+HashJoin(a1 ANY_subquery)*/ count(*) FROM test_tab a1 WHERE import_num = '4520440' and login IN (SELECT DISTINCT login FROM test_tab a2 WHERE import_num = '4520440' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab a3 WHERE import_num = '0' AND login IS NOT NULL);\n\n\nRegards,\nAnand\n\n\n\n\n\t\n\t\n\t\n\nView this message in context: Performance Issue -- \"Materialize\"\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Sat, 19 Aug 2017 10:37:56 -0700 (MST)",
"msg_from": "anand086 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Issue -- \"Materialize\""
},
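A note on the hint that "doesn't change the plan" in the post above: core PostgreSQL silently ignores Oracle-style /*+ ... */ comments, so the hint only takes effect if the third-party pg_hint_plan extension is installed and loaded, which is an assumption in the sketch below. The built-in alternative is a session-local planner toggle, as the poster already did with enable_material; shown here as a sketch, not a recommendation.

-- Assumes pg_hint_plan is installed on the server; without it the comment hint
-- below is silently ignored, which matches the behaviour reported in the post.
LOAD 'pg_hint_plan';
/*+ HashJoin(a1 ANY_subquery) */
SELECT count(*)
FROM test_tab a1
WHERE a1.import_num = '4520440'
  AND a1.login IN (SELECT DISTINCT login FROM test_tab a2
                   WHERE a2.import_num = '4520440' AND a2.login IS NOT NULL
                   EXCEPT
                   SELECT DISTINCT login FROM test_tab a3
                   WHERE a3.import_num = '0' AND a3.login IS NOT NULL);

-- Built-in alternative: steer the planner for one transaction only instead of
-- changing enable_material globally.
BEGIN;
SET LOCAL enable_material = off;
-- ... run the query here ...
COMMIT;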
{
"msg_contents": "Any thoughts on this? \n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Performance-Issue-Materialize-tp5979128p5979481.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Aug 2017 11:03:21 -0700 (MST)",
"msg_from": "anand086 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Issue -- \"Materialize\""
},
{
"msg_contents": "On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+| \n> QUERY PLAN \n> |+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> Aggregate (cost=351405.08..351405.09 rows=1 width=8) \n\nWould you send explain ANALYZE and not just explain ?\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Aug 2017 13:32:26 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Issue -- \"Materialize\""
},
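For reference, this is the form Justin is asking for: ANALYZE actually executes the statement and reports real row counts and timings next to the estimates, and BUFFERS adds block I/O. Table and column names are taken from the query in the thread.

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM test_tab
WHERE import_num = '4520460'
  AND login IN (SELECT DISTINCT login FROM test_tab
                WHERE import_num = '4520460' AND login IS NOT NULL
                EXCEPT
                SELECT DISTINCT login FROM test_tab
                WHERE import_num = '0' AND login IS NOT NULL);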
{
"msg_contents": "Do you have an index on login column ?\n\nIf not, try creating an index and taking off those DISTICTs.\n\nEm seg, 21 de ago de 2017 às 15:33, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:\n> >\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> > QUERY PLAN\n> >\n> |+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> > Aggregate (cost=351405.08..351405.09 rows=1 width=8)\n>\n> Would you send explain ANALYZE and not just explain ?\n>\n> Justin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDo you have an index on login column ?If not, try creating an index and taking off those DISTICTs.Em seg, 21 de ago de 2017 às 15:33, Justin Pryzby <[email protected]> escreveu:On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> QUERY PLAN\n> |+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> Aggregate (cost=351405.08..351405.09 rows=1 width=8)\n\nWould you send explain ANALYZE and not just explain ?\n\nJustin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 21 Aug 2017 18:46:32 +0000",
"msg_from": "Carlos Augusto Machado <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Issue -- \"Materialize\""
},
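A sketch of what Carlos is suggesting; the index names are made up, and the composite variant is an assumption based on the posted query, whose predicates always pair import_num with login.

CREATE INDEX idx_test_tab_login ON test_tab (login);

-- Since every branch filters on import_num together with login, a composite
-- index may serve both the outer filter and the subqueries:
CREATE INDEX idx_test_tab_import_login ON test_tab (import_num, login);

ANALYZE test_tab;  -- refresh planner statistics after creating the indexes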
{
"msg_contents": "I think you query is a bit confusing and have many subqueries, so I tried\nto simplify\n\nIf you cant´t have more import_num = 0 to the same login, try this\n\nSELECT count(*)\nFROM test_tab tab1\nLEFT JOIN test_tab tab2\n ON tab1.login = tab2.login AND tab2.import_num = '0'\nWHERE\n tab2.login IS NULL AND\n import_num = '4520440'\n\notherwise try this\n\nSELECT count(*)\nFROM test_tab tab1\nLEFT JOIN (\n SELECT DISTINCT login FROM test_tab WHERE import_num = '0'\n) tab2\n ON tab1.login = tab2.login\nWHERE\n tab2.login IS NULL AND\n import_num = '4520440'\n\n\nEm seg, 21 de ago de 2017 às 15:47, Carlos Augusto Machado <\[email protected]> escreveu:\n\n>\n> Do you have an index on login column ?\n>\n> If not, try creating an index and taking off those DISTICTs.\n>\n> Em seg, 21 de ago de 2017 às 15:33, Justin Pryzby <[email protected]>\n> escreveu:\n>\n>> On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:\n>> >\n>> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n>> > QUERY PLAN\n>> >\n>> |+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n>> > Aggregate (cost=351405.08..351405.09 rows=1 width=8)\n>>\n>> Would you send explain ANALYZE and not just explain ?\n>>\n>> Justin\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n\nI think you query is a bit confusing and have many subqueries, so I tried to simplifyIf you cant´t have more import_num = 0 to the same login, try thisSELECT count(*)FROM test_tab tab1LEFT JOIN test_tab tab2 ON tab1.login = tab2.login AND tab2.import_num = '0'WHERE tab2.login IS NULL AND import_num = '4520440'otherwise try thisSELECT count(*)FROM test_tab tab1LEFT JOIN ( SELECT DISTINCT login FROM test_tab WHERE import_num = '0') tab2 ON tab1.login = tab2.loginWHERE tab2.login IS NULL AND import_num = '4520440'Em seg, 21 de ago de 2017 às 15:47, Carlos Augusto Machado <[email protected]> escreveu:Do you have an index on login column ?If not, try creating an index and taking off those DISTICTs.Em seg, 21 de ago de 2017 às 15:33, Justin Pryzby <[email protected]> escreveu:On Sat, Aug 19, 2017 at 10:37:56AM -0700, anand086 wrote:\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> QUERY PLAN\n> |+-----------------------------------------------------------------------------------------------------------------------------------------------------------+|\n> Aggregate (cost=351405.08..351405.09 rows=1 width=8)\n\nWould you send explain ANALYZE and not just explain ?\n\nJustin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 21 Aug 2017 19:19:10 +0000",
"msg_from": "Carlos Augusto Machado <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Issue -- \"Materialize\""
},
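The same anti-join can also be written with NOT EXISTS, which PostgreSQL plans as an anti join and which needs no DISTINCT; a sketch under the same assumptions as Carlos's version. The IS NOT NULL check is kept because the original IN (...) form never matches NULL logins.

SELECT count(*)
FROM test_tab t1
WHERE t1.import_num = '4520440'
  AND t1.login IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM test_tab t0
                  WHERE t0.import_num = '0'
                    AND t0.login = t1.login);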
{
"msg_contents": "On Sat, Aug 19, 2017 at 10:37 AM, anand086 <[email protected]> wrote:\n\nYour email is very hard to read, the formatting and line wrapping is\nheavily mangled. You might want to attach the plans as files attachments\ninstead of or in addition to putting the in the body.\n\n\n\n> -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1\n> (cost=0.56..2.03 rows=1 width=8) |\n>\n> Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL))\n>\n>\nIt looks like the statistics for your table are desperately out of date, as\na later query showed there are 762599 rows (unless login is null for all of\nthem) but the above is estimating there is only one. When was the table\nlast analyzed?\n\nCheers,\n\nJeff\n\nOn Sat, Aug 19, 2017 at 10:37 AM, anand086 <[email protected]> wrote:\n\n> I am a Postgres Newbie and trying to learn :) We have a scenario wherein,\n> one of the SQL with different input value for import_num showing different\n> execution plan. As an example, with import_num = '4520440' the execution\n> plan shows Nested Loop and is taking ~12secs. With import_num = '4520460'\n> execution plan showed using \"Materialize\" and never completed. After I set\n> enable_material to off, the execution plan is changed using Hash Semi Join\n> and completes in less than 3 secs. SELECT count(*) FROM test_tab WHERE\n> login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520440'\n> AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE\n> import_num = '0' AND login IS NOT NULL) AND import_num = '4520440';\n> +--------+ | count | +--------+ | 746982 | +--------+ (1 row) Time:\n> 12054.274 ms\n>\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n> | QUERY PLAN |\n> +-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n> | Aggregate (cost=351405.08..351405.09 rows=1 width=8) |\n> | -> Nested Loop (cost=349846.23..350366.17 rows=415562 width=0) |\n> | -> HashAggregate (cost=349845.67..349847.67 rows=200 width=96) |\n> | Group Key: (\"ANY_subquery\".login)::text |\n> | -> Subquery Scan on \"ANY_subquery\" (cost=340828.23..348557.47 rows=515282 width=96) |\n> | -> SetOp Except (cost=340828.23..343404.65 rows=515282 width=100) |\n> | -> Sort (cost=340828.23..342116.44 rows=515283 width=100) |\n> | Sort Key: \"*SELECT* 1\".login |\n> | -> Append (cost=0.56..275836.74 rows=515283 width=100) |\n> | -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..275834.70 rows=515282 width=12) |\n> | -> Unique (cost=0.56..270681.88 rows=515282 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..268604.07 rows=831125 width=8) |\n> | Index Cond: ((import_num = '4520440'::numeric) AND (login IS NOT NULL)) |\n> | -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n> | -> Unique (cost=0.56..2.03 rows=1 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n> | Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.58 rows=1 width=8) |\n> | Index Cond: ((import_num = '4520440'::numeric) AND (login = (\"ANY_subquery\".login)::text)) |\n> 
+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n> (19 rows)\n>\n> SELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN\n> (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login\n> IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num =\n> '0' AND login IS NOT NULL); The SQL was never completing and had the below\n> SQL execution plan --\n>\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> | QUERY PLAN |\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> | Aggregate (cost=6.14..6.15 rows=1 width=8) |\n> | -> Nested Loop Semi Join (cost=1.12..6.13 rows=1 width=0) |\n> | Join Filter: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n> | Index Cond: (import_num = '4520460'::numeric) |\n> | -> Materialize (cost=0.56..4.10 rows=1 width=96) |\n> | -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n> | -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n> | -> Append (cost=0.56..4.08 rows=2 width=100) |\n> | -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n> | -> Unique (cost=0.56..2.03 rows=1 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n> | Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n> | -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n> | -> Unique (cost=0.56..2.03 rows=1 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n> | Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> (17 rows)\n>\n> ############################################# # After I set\n> enable_material to off; #############################################\n> SELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN\n> (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login\n> IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num =\n> '0' AND login IS NOT NULL); +--------+ | count | +--------+ | 762599 |\n> +--------+ (1 row) Time: 2116.889 ms\n>\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> | QUERY PLAN |\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> | Aggregate (cost=6.13..6.14 rows=1 width=8) |\n> | -> Hash Semi Join (cost=4.67..6.13 rows=1 width=0) |\n> | Hash Cond: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n> | Index Cond: (import_num = '4520460'::numeric) |\n> | -> Hash (cost=4.09..4.09 rows=1 width=96) |\n> | -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n> | -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n> | -> Append (cost=0.56..4.08 rows=2 width=100) |\n> | -> 
Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n> | -> Unique (cost=0.56..2.03 rows=1 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n> | Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n> | -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n> | -> Unique (cost=0.56..2.03 rows=1 width=8) |\n> | -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n> | Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n> +-------------------------------------------------------------------------------------------------------------------------------------------+\n> (17 rows)\n>\n> Looking at the row count for import_num select import_num, count(*) from\n> test_tab group by import_num order by 2; +------------+--------+ |\n> import_num | count | +------------+--------+ | 4520440 | 746982 | | 4520460\n> | 762599 | +------------+--------+ (37 rows) With different value of\n> import_num we are having different execution plan. Is there a way to force\n> the same Hash semi Join plan to sql with import_num 4520440, currently\n> doing nested loop. I tried /*+HashJoin(a1 ANY_subquery)*/ but the sql\n> execution plan doesn't change. SELECT /*+HashJoin(a1 ANY_subquery)*/\n> count(*) FROM test_tab a1 WHERE import_num = '4520440' and login IN (SELECT\n> DISTINCT login FROM test_tab a2 WHERE import_num = '4520440' AND login IS\n> NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab a3 WHERE import_num =\n> '0' AND login IS NOT NULL); Regards, Anand\n> ------------------------------\n> View this message in context: Performance Issue -- \"Materialize\"\n> <http://www.postgresql-archive.org/Performance-Issue-Materialize-tp5979128.html>\n> Sent from the PostgreSQL - performance mailing list archive\n> <http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html>\n> at Nabble.com.\n>\n\nOn Sat, Aug 19, 2017 at 10:37 AM, anand086 <[email protected]> wrote:Your email is very hard to read, the formatting and line wrapping is heavily mangled. You might want to attach the plans as files attachments instead of or in addition to putting the in the body. -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) | Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) It looks like the statistics for your table are desperately out of date, as a later query showed there are 762599 rows (unless login is null for all of them) but the above is estimating there is only one. When was the table last analyzed? Cheers,JeffOn Sat, Aug 19, 2017 at 10:37 AM, anand086 <[email protected]> wrote:I am a Postgres Newbie and trying to learn :)\n\nWe have a scenario wherein, one of the SQL with different input value for import_num showing different execution plan.\n\nAs an example, with import_num = '4520440' the execution plan shows Nested Loop and is taking ~12secs. \nWith import_num = '4520460' execution plan showed using \"Materialize\" and never completed. After I set enable_material to off, the execution plan is changed using Hash Semi Join and completes in less than 3 secs. 
\n\nSELECT count(*) FROM test_tab WHERE login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520440' AND \nlogin IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL) \nAND import_num = '4520440';\n\n+--------+\n| count |\n+--------+\n| 746982 |\n+--------+\n(1 row)\n\nTime: 12054.274 ms\n\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=351405.08..351405.09 rows=1 width=8) |\n| -> Nested Loop (cost=349846.23..350366.17 rows=415562 width=0) |\n| -> HashAggregate (cost=349845.67..349847.67 rows=200 width=96) |\n| Group Key: (\"ANY_subquery\".login)::text |\n| -> Subquery Scan on \"ANY_subquery\" (cost=340828.23..348557.47 rows=515282 width=96) |\n| -> SetOp Except (cost=340828.23..343404.65 rows=515282 width=100) |\n| -> Sort (cost=340828.23..342116.44 rows=515283 width=100) |\n| Sort Key: \"*SELECT* 1\".login |\n| -> Append (cost=0.56..275836.74 rows=515283 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..275834.70 rows=515282 width=12) |\n| -> Unique (cost=0.56..270681.88 rows=515282 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..268604.07 rows=831125 width=8) |\n| Index Cond: ((import_num = '4520440'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.58 rows=1 width=8) |\n| Index Cond: ((import_num = '4520440'::numeric) AND (login = (\"ANY_subquery\".login)::text)) |\n+-----------------------------------------------------------------------------------------------------------------------------------------------------------+\n(19 rows)\n\n\n\nSELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL);\n\nThe SQL was never completing and had the below SQL execution plan --\n\n\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=6.14..6.15 rows=1 width=8) |\n| -> Nested Loop Semi Join (cost=1.12..6.13 rows=1 width=0) |\n| Join Filter: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n| Index Cond: (import_num = '4520460'::numeric) |\n| -> Materialize (cost=0.56..4.10 rows=1 width=96) |\n| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n| -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n| -> Append (cost=0.56..4.08 rows=2 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 
width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n(17 rows)\n\n\n#############################################\n# After I set enable_material to off;\n#############################################\n\nSELECT count(*) FROM test_tab WHERE import_num = '4520460' and login IN (SELECT DISTINCT login FROM test_tab WHERE import_num = '4520460' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab WHERE import_num = '0' AND login IS NOT NULL);\n+--------+\n| count |\n+--------+\n| 762599 |\n+--------+\n(1 row)\n\nTime: 2116.889 ms\n\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n| Aggregate (cost=6.13..6.14 rows=1 width=8) |\n| -> Hash Semi Join (cost=4.67..6.13 rows=1 width=0) |\n| Hash Cond: ((test_tab.login)::text = (\"ANY_subquery\".login)::text) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab (cost=0.56..2.02 rows=1 width=8) |\n| Index Cond: (import_num = '4520460'::numeric) |\n| -> Hash (cost=4.09..4.09 rows=1 width=96) |\n| -> Subquery Scan on \"ANY_subquery\" (cost=0.56..4.09 rows=1 width=96) |\n| -> HashSetOp Except (cost=0.56..4.08 rows=1 width=100) |\n| -> Append (cost=0.56..4.08 rows=2 width=100) |\n| -> Subquery Scan on \"*SELECT* 1\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_1 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '4520460'::numeric) AND (login IS NOT NULL)) |\n| -> Subquery Scan on \"*SELECT* 2\" (cost=0.56..2.04 rows=1 width=12) |\n| -> Unique (cost=0.56..2.03 rows=1 width=8) |\n| -> Index Only Scan using ui_nkey_test_tab on test_tab test_tab_2 (cost=0.56..2.03 rows=1 width=8) |\n| Index Cond: ((import_num = '0'::numeric) AND (login IS NOT NULL)) |\n+-------------------------------------------------------------------------------------------------------------------------------------------+\n(17 rows)\n\n\n\nLooking at the row count for import_num\n\nselect import_num, count(*) from test_tab group by import_num order by 2;\n+------------+--------+\n| import_num | count |\n+------------+--------+\n| 4520440 | 746982 |\n| 4520460 | 762599 |\n+------------+--------+\n(37 rows)\n\n\nWith different value of import_num we are having different execution plan. 
Is there a way to force the same Hash semi Join plan to sql with import_num 4520440, currently doing nested loop.\n\nI tried /*+HashJoin(a1 ANY_subquery)*/ but the sql execution plan doesn't change.\n\nSELECT /*+HashJoin(a1 ANY_subquery)*/ count(*) FROM test_tab a1 WHERE import_num = '4520440' and login IN (SELECT DISTINCT login FROM test_tab a2 WHERE import_num = '4520440' AND login IS NOT NULL EXCEPT SELECT DISTINCT login FROM test_tab a3 WHERE import_num = '0' AND login IS NOT NULL);\n\n\nRegards,\nAnand\n\n\n\n\n\t\n\t\n\t\n\nView this message in context: Performance Issue -- \"Materialize\"\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Mon, 21 Aug 2017 13:28:07 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Issue -- \"Materialize\""
}
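A quick way to check Jeff's point and refresh the statistics; the SET STATISTICS step is only an option for when the default sample turns out to be too small for import_num.

-- When did autovacuum or a manual ANALYZE last refresh the statistics?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'test_tab';

ANALYZE test_tab;

-- Optional: take a larger per-column sample if the row estimates stay far off.
ALTER TABLE test_tab ALTER COLUMN import_num SET STATISTICS 1000;
ANALYZE test_tab;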
] |
[
{
"msg_contents": "Hi, I have a query that I run in my postgresql 9.6 database and it runs for\nmore than 24 hours and doesnt finish.\n\nMy select consist from few joins :\n\nSELECT a.inst_prod_id,\n product_id,\n nap_area2,\n nap_phone_num,\n nap_product_id,\n b.nap_discount_num,\n b.nap_makat_cd,\n nap_act_start_dt,\n b.nap_debt_line,\n nap_act_end_dt,\n b.row_added_dttm\n b.row_lastmant_dttm,\n FROM ps_rf_inst_prod a,\n AND a.setid || ''= 'SHARE'\n nap_ip_discount b\n WHERE nap_crm_status = 'C_04'\n AND b.nap_makat_cd IN (SELECT term_code\n AND b.setid || ''= 'SHARE'\n AND a.inst_prod_id = b.inst_prod_id\n AND start_date <=\nb.nap_rishum_date\n FROM tv_finterm\n WHERE\npricing_method_code in ('2', '4')\n AND coalesce(end_date,\nto_date('01/01/2095','DD/MM/YYYY')) !=\n AND coalesce(end_date,\nto_date('01/01/2095','DD/MM/YYYY')) >=\n b.nap_rishum_date\n start_date)\n AND (b.row_lastmant_dttm >\nto_date('01/01/2005','DD/MM/YYYY') OR\n AND b.nap_act_end_dt > clock_timestamp()\n AND TRUNC(b.nap_act_start_dt) <\nTRUNC(b.nap_act_end_dt)\n b.nap_rishum_date >\nto_date('01/01/2005','DD/MM/YYYY') OR\n WHERE PERCENT IS NOT NULL\n b.row_added_dttm >\nto_date('01/01/2005','DD/MM/YYYY'))\n AND b.nap_discount_num IN\n(SELECT k.discount_line\n FROM tv_discounts_details k\n AND k.start_month = 1)\n AND c.phone\n= a.nap_phone_num\n AND (NOT EXISTS(SELECT /*+index(c\nTC_FINTERMS_I_SERVICE) */\n 1\n FROM tc_finterms c\n WHERE c.area\n= a.nap_area2\n AND\nc.term_code = b.nap_makat_cd\n WHERE service_uid =\n(a.inst_prod_id)::integer\n AND\ndeb_cred_line_no = b.nap_debt_line\n AND\n(payment_end_date > clock_timestamp())\n AND term_type = '2')\n OR NOT EXISTS(SELECT 1\n FROM ip_service_discounts\n AND service_code =\nb.nap_makat_cd\n and b.nap_purch_instprod = ' ';\n AND discount_code =\nb.nap_discount_num\n AND (end_date IS NULL OR\ncoalesce(discount_end_date, clock_timestamp() + interval '1 days') >\nclock_timestamp())))\n\nBefore trying to work on performance I checked locks and nothing returned :\n\n=# select a1.query as blocking_query, a2. query as waiting_query,\n t.schemaname ||'.'||t.relname as locked_table from\npg_stat_activity\n a1 join pg_locks p1 on a1. pid = p1.pid and p1.granted join pg_locks\n pg_stat_activity a2 on a2. 
pid = p2.pid join pg_stat_all_tables t on\n p2 on p1.relation = p2.relation and not p2.granted join\n p1.relation = t.relid;\n (0 rows)\n blocking_query | waiting_query | locked_table\n ----------------+---------------+--------------\n\nI checked the explain plan of my query :\n\n Nested Loop Semi Join (cost=0.43..7565655389.26 rows=1 width=93)\n Join Filter: (b.nap_discount_num = (k.discount_line)::numeric)\n -> Seq Scan on ps_rf_inst_prod a (cost=0.00..4337158.91\nrows=40452 width=41)\n -> Nested Loop (cost=0.43..7565653159.07 rows=2 width=93)\n -> Index Scan using ps_nap_ip_discount on nap_ip_discount b\n (cost=0.43..186920.69 rows=1 width=60)\n Filter: (((nap_crm_status)::text = 'C_04'::text) AND\n (((setid)::text || ''::text) = 'SHARE'::text))\n Filter: (((nap_purch_instprod)::text = ' '::text) AND\n (nap_act_end_dt > clock_timestamp()) AND (((setid)::text || ''::t\n Index Cond: ((inst_prod_id)::text = (a.inst_prod_id)::text)\ne('01/01/2005'::text, 'DD/MM/YYYY'::text)) OR (nap_rishum_date >\nto_date('01/01/2005'::text, 'DD/MM/YYYY'::text)) OR (row_added_dttm >\next) = 'SHARE'::text) AND (trunc(nap_act_start_dt, 'DDD'::text) <\ntrunc(nap_act_end_dt, 'DDD'::text)) AND ((row_lastmant_dttm > to_dat\n -> Index Scan using tc_finterms_ix1 on tc_finterms c\n (cost=0.56..8.60 rows=1 width=0)\n to_date('01/01/2005'::text, 'DD/MM/YYYY'::text))) AND ((NOT (SubPlan\n2)) OR (NOT (SubPlan 3))) AND (SubPlan 1))\n SubPlan 2\n b.nap_makat_cd) AND (deb_cred_line_no = (b.nap_debt_line)::double\nprecision))\n Index Cond: (((area)::text = (a.nap_area2)::text)\nAND ((phone)::text = (a.nap_phone_num)::text))\n Filter: (((term_type)::text = '2'::text) AND\n(payment_end_date > clock_timestamp()) AND ((term_code)::numeric =\n Filter: (((service_code)::numeric =\nb.nap_makat_cd) AND ((discount_code)::numeric =\nb.nap_discount_num) AND ((e\n SubPlan 3\n -> Index Scan using ip_service_discounts_pkey on\nip_service_discounts (cost=0.56..10.78 rows=1 width=0)\n Index Cond: (service_uid = (a.inst_prod_id)::integer)\n Recheck Cond: (((pricing_method_code)::text =\nANY ('{2,4}'::text[])) AND (start_date <= b.nap_rishum_date))\nnd_date IS NULL) OR (COALESCE((discount_end_date)::timestamp with time\nzone, (clock_timestamp() + '1 day'::interval)) > clock_timestam\np())))\n SubPlan 1\n -> Bitmap Heap Scan on tv_finterm\n(cost=2290.83..17301.61 rows=26907 width=4)\n -> Bitmap Index Scan on index_test_mariel\n(cost=0.00..2284.11 rows=81126 width=0)\n Filter: ((COALESCE(end_date,\n(to_date('01/01/2095'::text, 'DD/MM/YYYY'::text))::timestamp without\ntime zone) >=\n b.nap_rishum_date) AND (COALESCE(end_date,\n(to_date('01/01/2095'::text, 'DD/MM/YYYY'::text))::timestamp\nwithout time zone) <> start_d\nate))\n(25 rows)\n Index Cond: (((pricing_method_code)::text\n= ANY ('{2,4}'::text[])) AND (start_date <= b.nap_rishum_date))\n -> Materialize (cost=0.00..1407.38 rows=43933 width=4)\n -> Seq Scan on tv_discounts_details k (cost=0.00..1187.71\nrows=43933 width=4)\n Filter: ((percent IS NOT NULL) AND (start_month = 1))\n\nI run vacuum analyze database before running the query. Some info about the\ntables :\n\nps_rf_inst_prod - 32G\nnap_ip_discount-1G\ntv_finterm - 100M\ntc_finterms - 6G\nTV_FINTERM - 1G\n\nThis query is part of an app that I migrated from oracle to postgresql. I\ndont want to change the query much, looking for a way to change the plan to\nmake it faster.. 
I have indexes on ps_rf_inst_prod, when I delete the\npipelines in :\n\n AND a.setid || ''= 'SHARE'\n AND b.setid || ''= 'SHARE'\n\nthe plan is changing and it uses indexes on ps_rf_inst_prod but it costs\nmore and the performance are worse.\n\nPlease , HELP...\n\nHi, I have a query that I run in my postgresql 9.6 database and it runs for more than 24 hours and doesnt finish.My select consist from few joins :SELECT a.inst_prod_id, product_id, nap_area2,\n nap_phone_num, nap_product_id,\n b.nap_discount_num, b.nap_makat_cd,\n nap_act_start_dt, b.nap_debt_line,\n nap_act_end_dt,\n b.row_added_dttm b.row_lastmant_dttm,\n FROM ps_rf_inst_prod a,\n AND a.setid || ''= 'SHARE' nap_ip_discount b\n WHERE nap_crm_status = 'C_04'\n AND b.nap_makat_cd IN (SELECT term_code AND b.setid || ''= 'SHARE'\n AND a.inst_prod_id = b.inst_prod_id\n AND start_date <= b.nap_rishum_date FROM tv_finterm\n WHERE pricing_method_code in ('2', '4')\n AND coalesce(end_date, to_date('01/01/2095','DD/MM/YYYY')) != AND coalesce(end_date, to_date('01/01/2095','DD/MM/YYYY')) >=\n b.nap_rishum_date\n start_date)\n AND (b.row_lastmant_dttm > to_date('01/01/2005','DD/MM/YYYY') OR AND b.nap_act_end_dt > clock_timestamp()\n AND TRUNC(b.nap_act_start_dt) < TRUNC(b.nap_act_end_dt)\n b.nap_rishum_date > to_date('01/01/2005','DD/MM/YYYY') OR\n WHERE PERCENT IS NOT NULL b.row_added_dttm > to_date('01/01/2005','DD/MM/YYYY'))\n AND b.nap_discount_num IN (SELECT k.discount_line\n FROM tv_discounts_details k\n AND k.start_month = 1)\n AND c.phone = a.nap_phone_num AND (NOT EXISTS(SELECT /*+index(c TC_FINTERMS_I_SERVICE) */\n 1\n FROM tc_finterms c\n WHERE c.area = a.nap_area2\n AND c.term_code = b.nap_makat_cd\n WHERE service_uid = (a.inst_prod_id)::integer AND deb_cred_line_no = b.nap_debt_line\n AND (payment_end_date > clock_timestamp())\n AND term_type = '2')\n OR NOT EXISTS(SELECT 1\n FROM ip_service_discounts\n AND service_code = b.nap_makat_cd\n and b.nap_purch_instprod = ' '; AND discount_code = b.nap_discount_num\n AND (end_date IS NULL OR coalesce(discount_end_date, clock_timestamp() + interval '1 days') > clock_timestamp())))Before trying to work on performance I checked locks and nothing returned :=# select a1.query as blocking_query, a2. query as waiting_query, t.schemaname ||'.'||t.relname as locked_table from pg_stat_activity a1 join pg_locks p1 on a1. pid = p1.pid and p1.granted join pg_locks \n pg_stat_activity a2 on a2. 
pid = p2.pid join pg_stat_all_tables t on p2 on p1.relation = p2.relation and not p2.granted join \n p1.relation = t.relid;\n (0 rows) blocking_query | waiting_query | locked_table\n ----------------+---------------+--------------I checked the explain plan of my query : Nested Loop Semi Join (cost=0.43..7565655389.26 rows=1 width=93) Join Filter: (b.nap_discount_num = (k.discount_line)::numeric)\n -> Seq Scan on ps_rf_inst_prod a (cost=0.00..4337158.91 rows=40452 width=41) -> Nested Loop (cost=0.43..7565653159.07 rows=2 width=93)\n -> Index Scan using ps_nap_ip_discount on nap_ip_discount b (cost=0.43..186920.69 rows=1 width=60) Filter: (((nap_crm_status)::text = 'C_04'::text) AND (((setid)::text || ''::text) = 'SHARE'::text))\n Filter: (((nap_purch_instprod)::text = ' '::text) AND (nap_act_end_dt > clock_timestamp()) AND (((setid)::text || ''::t Index Cond: ((inst_prod_id)::text = (a.inst_prod_id)::text)\ne('01/01/2005'::text, 'DD/MM/YYYY'::text)) OR (nap_rishum_date > to_date('01/01/2005'::text, 'DD/MM/YYYY'::text)) OR (row_added_dttm >ext) = 'SHARE'::text) AND (trunc(nap_act_start_dt, 'DDD'::text) < trunc(nap_act_end_dt, 'DDD'::text)) AND ((row_lastmant_dttm > to_dat\n -> Index Scan using tc_finterms_ix1 on tc_finterms c (cost=0.56..8.60 rows=1 width=0) to_date('01/01/2005'::text, 'DD/MM/YYYY'::text))) AND ((NOT (SubPlan 2)) OR (NOT (SubPlan 3))) AND (SubPlan 1))\n SubPlan 2\n b.nap_makat_cd) AND (deb_cred_line_no = (b.nap_debt_line)::double precision)) Index Cond: (((area)::text = (a.nap_area2)::text) AND ((phone)::text = (a.nap_phone_num)::text))\n Filter: (((term_type)::text = '2'::text) AND (payment_end_date > clock_timestamp()) AND ((term_code)::numeric =\n Filter: (((service_code)::numeric = b.nap_makat_cd) AND ((discount_code)::numeric = b.nap_discount_num) AND ((e SubPlan 3\n -> Index Scan using ip_service_discounts_pkey on ip_service_discounts (cost=0.56..10.78 rows=1 width=0)\n Index Cond: (service_uid = (a.inst_prod_id)::integer)\n Recheck Cond: (((pricing_method_code)::text = ANY ('{2,4}'::text[])) AND (start_date <= b.nap_rishum_date))nd_date IS NULL) OR (COALESCE((discount_end_date)::timestamp with time zone, (clock_timestamp() + '1 day'::interval)) > clock_timestam\np())))\n SubPlan 1\n -> Bitmap Heap Scan on tv_finterm (cost=2290.83..17301.61 rows=26907 width=4)\n -> Bitmap Index Scan on index_test_mariel (cost=0.00..2284.11 rows=81126 width=0) Filter: ((COALESCE(end_date, (to_date('01/01/2095'::text, 'DD/MM/YYYY'::text))::timestamp without time zone) >=\n b.nap_rishum_date) AND (COALESCE(end_date, (to_date('01/01/2095'::text, 'DD/MM/YYYY'::text))::timestamp without time zone) <> start_d\nate))\n(25 rows) Index Cond: (((pricing_method_code)::text = ANY ('{2,4}'::text[])) AND (start_date <= b.nap_rishum_date))\n -> Materialize (cost=0.00..1407.38 rows=43933 width=4)\n -> Seq Scan on tv_discounts_details k (cost=0.00..1187.71 rows=43933 width=4)\n Filter: ((percent IS NOT NULL) AND (start_month = 1))I run vacuum analyze database before running the query. Some info about the tables :ps_rf_inst_prod - 32G nap_ip_discount-1G \ntv_finterm - 100Mtc_finterms - 6G\nTV_FINTERM - 1GThis query is part of an app that I migrated from oracle to postgresql. I dont want to change the query much, looking for a way to change the plan to make it faster.. 
I have indexes on ps_rf_inst_prod, when I delete the pipelines in : AND a.setid || ''= 'SHARE' AND b.setid || ''= 'SHARE'the plan is changing and it uses indexes on ps_rf_inst_prod but it costs more and the performance are worse.Please , HELP...",
"msg_date": "Tue, 22 Aug 2017 17:23:24 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "query runs for more than 24 hours!"
},
{
"msg_contents": "\n\nOn 08/22/2017 04:23 PM, Mariel Cherkassky wrote:\n> Hi, I have a query that I run in my postgresql 9.6 database and it runs \n> for more than 24 hours and doesnt finish.\n> \n> My select consist from few joins :\n> \n\nI'm sorry, but the query and plans are completely broken (wrapped in \nfunny ways, missing important bits. ...) I don't know what client you \nuse or how that happened, but I recommend attaching the information as \ntext files instead of pasting it into the message directly.\n\nRegarding the query analysis - we can't really help you much without \nseeing an explain analyze (that is, not just the plan and estimates, but \nactual performance and row counts). That usually identifies the query \noperations (scans, join, ...) causing issues.\n\nOf course, if the query is already running for 24h and you don't know \nhow much longer it will take to complete, running EXPLAIN ANALYZE on it \nis not very practical. The best thing you can do is break the query into \nsmaller parts and debugging that - start with one table, and then add \ntables/conditions until the performance gets bad. Hopefully the explain \nanalyze on that will complete in reasonable time.\n\nOf course, you haven't told us anything about what's happening on the \nmachine. It is reading a lot of data from the disks? Random or \nsequential? Is it writing a lot of data into temporary files? Is it \nconsuming a lot of CPU? And so on.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Aug 2017 23:02:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query runs for more than 24 hours!"
},
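A sketch of the incremental approach Tomas describes, using table and column names taken from the posted plan; the || '' concatenations from the original query are dropped here purely to keep the sketch simple.

-- Step 1: the driving table with only its local filters.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM nap_ip_discount b
WHERE b.nap_crm_status = 'C_04'
  AND b.setid = 'SHARE';

-- Step 2: add one join and re-run, watching where estimated and actual row
-- counts start to diverge; keep adding tables and conditions the same way.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM nap_ip_discount b
JOIN ps_rf_inst_prod a ON a.inst_prod_id = b.inst_prod_id
WHERE b.nap_crm_status = 'C_04'
  AND b.setid = 'SHARE'
  AND a.setid = 'SHARE';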
{
"msg_contents": "On 2017-08-22 16:23, Mariel Cherkassky wrote:\n\n> \n> SELECT a.inst_prod_id,\n> product_id,\n> nap_area2,\n> nap_phone_num,\n> nap_product_id,\n> b.nap_discount_num,\n> b.nap_makat_cd,\n> nap_act_start_dt,\n> b.nap_debt_line,\n> nap_act_end_dt,\n> b.row_added_dttm\n> b.row_lastmant_dttm,\n> FROM ps_rf_inst_prod a,\n> AND a.setid || ''= 'SHARE'\n> nap_ip_discount b\n> WHERE nap_crm_status = 'C_04'\n> AND b.nap_makat_cd IN (SELECT\n> term_code AND b.setid || ''=\n> 'SHARE'\n> AND a.inst_prod_id =\n\n\nOn my screen the order of the lines in the query seem to get messed up,\nI'm not sure if that's my email program or a copy/paste error.\n\n From what I can see, you are using subselects in an IN statement,\nwhich can be a problem if that has to be re-evaluated a lot.\n\nIt's hard for me to say more because I can't tell what the actual query \nis at the moment.\n\nRegards, Vincent.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 09:06:00 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query runs for more than 24 hours!"
}
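Since the original query is too mangled to quote, this only shows the shape of the rewrite vinny is hinting at, using one of the IN (subselect) predicates from the post; treat it as an illustration, not a drop-in replacement.

-- IN (subselect) ...
SELECT b.nap_makat_cd
FROM nap_ip_discount b
WHERE b.nap_makat_cd IN (SELECT t.term_code
                         FROM tv_finterm t
                         WHERE t.pricing_method_code IN ('2', '4'));

-- ... and the equivalent EXISTS form, which the planner treats as a semi-join.
SELECT b.nap_makat_cd
FROM nap_ip_discount b
WHERE EXISTS (SELECT 1
              FROM tv_finterm t
              WHERE t.term_code = b.nap_makat_cd
                AND t.pricing_method_code IN ('2', '4'));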
] |
[
{
"msg_contents": "I'm trying to understand what postgresql doing in an issue that I'm having.\nOur app team wrote a function that runs with a cursor over the results of a\nquery and via the utl_file func they write some columns to a file. I dont\nunderstand why, but postgresql write the data into the file in the fs in\nparts. I mean that it runs the query and it takes time to get back results\nand when I see that the results back postgresql write to file the data and\nthen suddenly stops for X minutes. After those x minutes it starts again to\nwrite the data and it continues that way until its done. The query returns\ntotal *100* rows. I want to understand why it stops suddenly. There arent\nany locks in the database during this operation.\n\nmy function looks like that :\n\nfunc(a,b,c...)\n\ncursor cr for\n\nselect ab,c,d,e.....\n\nbegin\n\nraise notice - 'starting loop time - %',timeofday();\n\n for cr_record in cr\n\n Raise notice 'print to file - '%',timeofday();\n\n utl_file.write(file,cr_record)\n\n end loop\n\nend\n\nI see the log of the running the next output :\n\nstarting loop 16:00\n\nprint to file : 16:03\n\nprint to file : 16:03\n\nprint to file : 16:07\n\nprint to file : 16:07\n\nprint to file : 16:07\n\nprint to file : 16:010\n\n......\n\n\n\nCan somebody explain to me this kind of behavior ? Why is it taking some\nmuch time to write and in different minutes after the query already been\nexecuted and finished ? Mybe I'm getting from the cursor only part of the\nrows ?\n\nI'm trying to understand what postgresql doing in an issue that I'm having. Our app team wrote a function that runs with a cursor over the results of a query and via the utl_file func they write some columns to a file. I dont understand why, but postgresql write the data into the file in the fs in parts. I mean that it runs the query and it takes time to get back results and when I see that the results back postgresql write to file the data and then suddenly stops for X minutes. After those x minutes it starts again to write the data and it continues that way until its done. The query returns total 100 rows. I want to understand why it stops suddenly. There arent any locks in the database during this operation.my function looks like that : func(a,b,c...)cursor cr forselect ab,c,d,e.....beginraise notice - 'starting loop time - %',timeofday(); for cr_record in cr Raise notice 'print to file - '%',timeofday(); utl_file.write(file,cr_record) end loopendI see the log of the running the next output : starting loop 16:00print to file : 16:03print to file : 16:03print to file : 16:07print to file : 16:07print to file : 16:07print to file : 16:010......Can somebody explain to me this kind of behavior ? Why is it taking some much time to write and in different minutes after the query already been executed and finished ? Mybe I'm getting from the cursor only part of the rows ?",
"msg_date": "Thu, 24 Aug 2017 16:15:19 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "printing results of query to file in different times"
},
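The function in the post is pseudocode; in PostgreSQL, utl_file comes from the orafce extension (an assumption here), and the loop would look roughly like the sketch below. Table, column, and file names are placeholders.

CREATE OR REPLACE FUNCTION dump_rows(p_dir text, p_file text)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    f   utl_file.file_type;
    rec record;
BEGIN
    RAISE NOTICE 'starting loop time - %', timeofday();
    -- The directory must be registered for utl_file (orafce keeps an
    -- utl_file.utl_file_dir table of allowed paths).
    f := utl_file.fopen(p_dir, p_file, 'w');
    FOR rec IN SELECT a, b, c FROM some_table LOOP
        RAISE NOTICE 'print to file - %', timeofday();
        PERFORM utl_file.put_line(f, format('%s,%s,%s', rec.a, rec.b, rec.c));
    END LOOP;
    PERFORM utl_file.fclose(f);
END;
$$;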
{
"msg_contents": "Anyone?\n\n2017-08-24 16:15 GMT+03:00 Mariel Cherkassky <[email protected]>:\n\n> I'm trying to understand what postgresql doing in an issue that I'm\n> having. Our app team wrote a function that runs with a cursor over the\n> results of a query and via the utl_file func they write some columns to a\n> file. I dont understand why, but postgresql write the data into the file in\n> the fs in parts. I mean that it runs the query and it takes time to get\n> back results and when I see that the results back postgresql write to file\n> the data and then suddenly stops for X minutes. After those x minutes it\n> starts again to write the data and it continues that way until its done.\n> The query returns total *100* rows. I want to understand why it stops\n> suddenly. There arent any locks in the database during this operation.\n>\n> my function looks like that :\n>\n> func(a,b,c...)\n>\n> cursor cr for\n>\n> select ab,c,d,e.....\n>\n> begin\n>\n> raise notice - 'starting loop time - %',timeofday();\n>\n> for cr_record in cr\n>\n> Raise notice 'print to file - '%',timeofday();\n>\n> utl_file.write(file,cr_record)\n>\n> end loop\n>\n> end\n>\n> I see the log of the running the next output :\n>\n> starting loop 16:00\n>\n> print to file : 16:03\n>\n> print to file : 16:03\n>\n> print to file : 16:07\n>\n> print to file : 16:07\n>\n> print to file : 16:07\n>\n> print to file : 16:010\n>\n> ......\n>\n>\n>\n> Can somebody explain to me this kind of behavior ? Why is it taking some\n> much time to write and in different minutes after the query already been\n> executed and finished ? Mybe I'm getting from the cursor only part of the\n> rows ?\n>\n>\n>\n\nAnyone?2017-08-24 16:15 GMT+03:00 Mariel Cherkassky <[email protected]>:I'm trying to understand what postgresql doing in an issue that I'm having. Our app team wrote a function that runs with a cursor over the results of a query and via the utl_file func they write some columns to a file. I dont understand why, but postgresql write the data into the file in the fs in parts. I mean that it runs the query and it takes time to get back results and when I see that the results back postgresql write to file the data and then suddenly stops for X minutes. After those x minutes it starts again to write the data and it continues that way until its done. The query returns total 100 rows. I want to understand why it stops suddenly. There arent any locks in the database during this operation.my function looks like that : func(a,b,c...)cursor cr forselect ab,c,d,e.....beginraise notice - 'starting loop time - %',timeofday(); for cr_record in cr Raise notice 'print to file - '%',timeofday(); utl_file.write(file,cr_record) end loopendI see the log of the running the next output : starting loop 16:00print to file : 16:03print to file : 16:03print to file : 16:07print to file : 16:07print to file : 16:07print to file : 16:010......Can somebody explain to me this kind of behavior ? Why is it taking some much time to write and in different minutes after the query already been executed and finished ? Mybe I'm getting from the cursor only part of the rows ?",
"msg_date": "Thu, 31 Aug 2017 11:07:28 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
},
{
"msg_contents": "Our app team wrote a function that runs with a cursor over the results of a\nquery and via the utl_file func they write some columns to a file. I dont\nunderstand why, but postgresql write the data into the file in the fs in\nparts. I mean that it runs the query and it takes time to get back results\nand when I see that the results back postgresql write to file the data and\nthen suddenly stops for X minutes. After those x minutes it starts again to\nwrite the data and it continues that way until its done. I want to\nunderstand why it stops suddenly. There arent any locks in the database\nduring this operation.\n\nmy function looks like that :\n\nfunc(a,b,c...)\n\ncursor cr for\n\nselect ab,c,d,e.....\n\nbegin\n\nraise notice - 'starting loop time - %',timeofday();\n\n for cr_record in cr\n\n Raise notice 'print to file - '%',timeofday();\n\n utl_file.write(file,cr_record)\n\n end loop\n\nend\n\nI see the log of the running the next output :\n\nstarting loop 16:00\n\nprint to file : 16:03\n\nprint to file : 16:03\n\nprint to file : 16:03\n\nprint to file : 16:03\n\nprint to file : 16:07\n\nprint to file : 16:07\n\nprint to file : 16:07\n\nprint to file : 16:010\n\n......\n\n\n\nCan somebody explain to me this kind of behavior ? Why is it taking some\nmuch time to write and in different minutes after the query already been\nexecuted and finished ? Mybe I'm getting from the cursor only part of the\nrows ?\n\nOur app team wrote a function that runs with a cursor over the results of a query and via the utl_file func they write some columns to a file. I dont understand why, but postgresql write the data into the file in the fs in parts. I mean that it runs the query and it takes time to get back results and when I see that the results back postgresql write to file the data and then suddenly stops for X minutes. After those x minutes it starts again to write the data and it continues that way until its done. I want to understand why it stops suddenly. There arent any locks in the database during this operation.my function looks like that : func(a,b,c...)cursor cr forselect ab,c,d,e.....beginraise notice - 'starting loop time - %',timeofday(); for cr_record in cr Raise notice 'print to file - '%',timeofday(); utl_file.write(file,cr_record) end loopendI see the log of the running the next output : starting loop 16:00print to file : 16:03print to file : 16:03print to file : 16:03print to file : 16:03print to file : 16:07print to file : 16:07print to file : 16:07print to file : 16:010......Can somebody explain to me this kind of behavior ? Why is it taking some much time to write and in different minutes after the query already been executed and finished ? Mybe I'm getting from the cursor only part of the rows ?",
"msg_date": "Thu, 31 Aug 2017 11:44:04 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: printing results of query to file in different times"
},
{
"msg_contents": "Can you show explain with analyze and buffers options for your query?\n\nRegards,\nRoman Konoval\[email protected]\n\n\n\n> On Aug 24, 2017, at 15:15, Mariel Cherkassky <[email protected]> wrote:\n> \n> I'm trying to understand what postgresql doing in an issue that I'm having. Our app team wrote a function that runs with a cursor over the results of a query and via the utl_file func they write some columns to a file. I dont understand why, but postgresql write the data into the file in the fs in parts. I mean that it runs the query and it takes time to get back results and when I see that the results back postgresql write to file the data and then suddenly stops for X minutes. After those x minutes it starts again to write the data and it continues that way until its done. The query returns total 100 rows. I want to understand why it stops suddenly. There arent any locks in the database during this operation.\n> \n> my function looks like that : \n> \n> func(a,b,c...)\n> \n> cursor cr for\n> \n> select ab,c,d,e.....\n> \n> begin\n> \n> raise notice - 'starting loop time - %',timeofday();\n> \n> for cr_record in cr\n> \n> Raise notice 'print to file - '%',timeofday();\n> \n> utl_file.write(file,cr_record)\n> \n> end loop\n> \n> end\n> \n> I see the log of the running the next output : \n> \n> starting loop 16:00\n> \n> print to file : 16:03\n> \n> print to file : 16:03\n> \n> print to file : 16:07\n> \n> print to file : 16:07\n> \n> print to file : 16:07\n> \n> print to file : 16:010\n> \n> ......\n> \n> \n> \n> \n> \n> Can somebody explain to me this kind of behavior ? Why is it taking some much time to write and in different minutes after the query already been executed and finished ? Mybe I'm getting from the cursor only part of the rows ?\n> \n> \n\n\nCan you show explain with analyze and buffers options for your query?\nRegards,Roman [email protected]\n\nOn Aug 24, 2017, at 15:15, Mariel Cherkassky <[email protected]> wrote:I'm trying to understand what postgresql doing in an issue that I'm having. Our app team wrote a function that runs with a cursor over the results of a query and via the utl_file func they write some columns to a file. I dont understand why, but postgresql write the data into the file in the fs in parts. I mean that it runs the query and it takes time to get back results and when I see that the results back postgresql write to file the data and then suddenly stops for X minutes. After those x minutes it starts again to write the data and it continues that way until its done. The query returns total 100 rows. I want to understand why it stops suddenly. There arent any locks in the database during this operation.my function looks like that : func(a,b,c...)cursor cr forselect ab,c,d,e.....beginraise notice - 'starting loop time - %',timeofday(); for cr_record in cr Raise notice 'print to file - '%',timeofday(); utl_file.write(file,cr_record) end loopendI see the log of the running the next output : starting loop 16:00print to file : 16:03print to file : 16:03print to file : 16:07print to file : 16:07print to file : 16:07print to file : 16:010......Can somebody explain to me this kind of behavior ? Why is it taking some much time to write and in different minutes after the query already been executed and finished ? Mybe I'm getting from the cursor only part of the rows ?",
"msg_date": "Thu, 31 Aug 2017 14:26:34 +0200",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: printing results of query to file in different times"
}
] |
[
{
"msg_contents": "Many years ago, I considered myself fairly expert when it came to\nperformance tuning and query optimization in postgresql, but I've been\nhiding out inside (insert big company that doesn't use Postgresql here) for\na long while while postgresql has continued to evolve, and much has changed\nsince Greg's book shipped in 2010. I'm wondering if there is anything more\nup-to-date than that excellent reference, especially anything that touches\non tuning and performance in AWS - RDS or running it directly on an EC2\ninstance.\n\nBut really, I'm mostly looking for documentation on query optimization and\ntuning for developers, rather than administration stuff. I've got a lot of\nfairly inexperienced engineers bumping into their first performance\nproblems resulting from very naive schema and query design as datasets have\nstarted to get large, and I'd rather point them at good documentation than\njust do their tuning for them. It's all low hanging fruit right now, but it\nwon't stay that way for long.\n\nThanks,\n\n--sam\n\nMany years ago, I considered myself fairly expert when it came to performance tuning and query optimization in postgresql, but I've been hiding out inside (insert big company that doesn't use Postgresql here) for a long while while postgresql has continued to evolve, and much has changed since Greg's book shipped in 2010. I'm wondering if there is anything more up-to-date than that excellent reference, especially anything that touches on tuning and performance in AWS - RDS or running it directly on an EC2 instance.But really, I'm mostly looking for documentation on query optimization and tuning for developers, rather than administration stuff. I've got a lot of fairly inexperienced engineers bumping into their first performance problems resulting from very naive schema and query design as datasets have started to get large, and I'd rather point them at good documentation than just do their tuning for them. It's all low hanging fruit right now, but it won't stay that way for long. Thanks,--sam",
"msg_date": "Thu, 24 Aug 2017 16:32:09 -0700",
"msg_from": "Sam Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "latest perf tuning info"
}
] |
[
{
"msg_contents": "Hello,\n\nWould I request to help me on this query.\n\nSELECT 'Inspection Completed' as \"ALL Status\" ,COUNT(*) as \"Number of Count\" FROM ud_document WHERE status = 'Inspection Completed' union SELECT 'Pending', COUNT(*) FROM ud_document WHERE status = 'Pending' union SELECT 'Approved', COUNT(*) FROM ud_document WHERE status = 'Approved' union SELECT 'Rejected', COUNT(*) FROM ud_document WHERE status = 'Rejected' union SELECT 'Payment Due',count(*) from ud_document where payment_status = 'Payment Due' union SELECT 'Payment Done' ,count(*) from ud_document where payment_status = 'Payment Done'\n\nAnd now I want to exclude the uniqueid= '201708141701018' from the above query. how it can be ???\n\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHello, \n \nWould I request to help me on this query.\n \nSELECT 'Inspection Completed' as \"ALL Status\" ,COUNT(*) as \"Number of Count\" FROM ud_document WHERE status = 'Inspection Completed' union SELECT 'Pending', COUNT(*) FROM ud_document WHERE status = 'Pending' union\n SELECT 'Approved', COUNT(*) FROM ud_document WHERE status = 'Approved' union SELECT 'Rejected', COUNT(*) FROM ud_document WHERE status = 'Rejected' union SELECT 'Payment Due',count(*) from ud_document where payment_status = 'Payment Due' union SELECT 'Payment\n Done' ,count(*) from ud_document where payment_status = 'Payment Done' \n \nAnd now I want to exclude the uniqueid= '201708141701018' from the above query. how it can be ???\n \n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 25 Aug 2017 06:49:07 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hi "
},
{
"msg_contents": "On Thu, Aug 24, 2017 at 11:49 PM, Daulat Ram <[email protected]> wrote:\n\n> Hello,\n>\n>\n>\n> Would I request to help me on this query.\n>\n>\n>\n> SELECT 'Inspection Completed' as \"ALL Status\" ,COUNT(*) as \"Number of\n> Count\" FROM ud_document WHERE status = 'Inspection Completed' union SELECT\n> 'Pending', COUNT(*) FROM ud_document WHERE status = 'Pending' union SELECT\n> 'Approved', COUNT(*) FROM ud_document WHERE status = 'Approved' union\n> SELECT 'Rejected', COUNT(*) FROM ud_document WHERE status = 'Rejected'\n> union SELECT 'Payment Due',count(*) from ud_document where payment_status =\n> 'Payment Due' union SELECT 'Payment Done' ,count(*) from ud_document where\n> payment_status = 'Payment Done'\n>\n>\n>\n> And now I want to exclude the uniqueid= '201708141701018' from the above\n> query. how it can be ???\n>\n>\n>\nYour use of UNION here seems necessary. Just write a normal GROUP BY\naggregation query. You might need to get a bit creative since you are\ncollapsing status and payment_status into a single column. \"CASE ... WHEN\n... THEN ... ELSE ... END\" is quite helpful for doing stuff like that. For\nnow I'll just leave them as two columns.\n\nSELECT status, payment_status, count(*)\nFROM ud_document\nWHERE uniqueid <> '201708141701018'\nGROUP BY 1, 2;\n\nDavid J.\n\nOn Thu, Aug 24, 2017 at 11:49 PM, Daulat Ram <[email protected]> wrote:\n\n\nHello, \n \nWould I request to help me on this query.\n \nSELECT 'Inspection Completed' as \"ALL Status\" ,COUNT(*) as \"Number of Count\" FROM ud_document WHERE status = 'Inspection Completed' union SELECT 'Pending', COUNT(*) FROM ud_document WHERE status = 'Pending' union\n SELECT 'Approved', COUNT(*) FROM ud_document WHERE status = 'Approved' union SELECT 'Rejected', COUNT(*) FROM ud_document WHERE status = 'Rejected' union SELECT 'Payment Due',count(*) from ud_document where payment_status = 'Payment Due' union SELECT 'Payment\n Done' ,count(*) from ud_document where payment_status = 'Payment Done' \n \nAnd now I want to exclude the uniqueid= '201708141701018' from the above query. how it can be ???\nYour use of UNION here seems necessary. Just write a normal GROUP BY aggregation query. You might need to get a bit creative since you are collapsing status and payment_status into a single column. \"CASE ... WHEN ... THEN ... ELSE ... END\" is quite helpful for doing stuff like that. For now I'll just leave them as two columns.SELECT status, payment_status, count(*)FROM ud_documentWHERE uniqueid <> '201708141701018'GROUP BY 1, 2;David J.",
"msg_date": "Fri, 25 Aug 2017 07:42:20 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hi"
}
] |
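A single-pass variant of the pivot David G. Johnston hints at above can be sketched with aggregate FILTER clauses (available since PostgreSQL 9.4). The ud_document table, the status labels, and the uniqueid filter are taken from the thread; the column aliases are illustrative only, and the result comes back as one row rather than the six labeled rows of the original UNION:

    -- One scan over ud_document instead of six; each FILTER clause
    -- reproduces one branch of the original UNION.
    SELECT count(*) FILTER (WHERE status = 'Inspection Completed')  AS inspection_completed,
           count(*) FILTER (WHERE status = 'Pending')               AS pending,
           count(*) FILTER (WHERE status = 'Approved')              AS approved,
           count(*) FILTER (WHERE status = 'Rejected')              AS rejected,
           count(*) FILTER (WHERE payment_status = 'Payment Due')   AS payment_due,
           count(*) FILTER (WHERE payment_status = 'Payment Done')  AS payment_done
    FROM   ud_document
    WHERE  uniqueid <> '201708141701018';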
[
{
"msg_contents": "Dear all\n\nSomeone help me analyze the execution plans below, is the query 12 of\nTPC-H benchmark [1].\nI need to find out why the query without index runs faster (7 times)\nthan with index, although the costs are smaller (see table).\nI have other cases that happened in the same situation. The server\nparameters have been set with PGTUNE. I use postgresql version 9.6.4\non Debian 8 OS with 4 GB memory.\n\nQuery|Index(yes/no) |Time Spend |Cost Total\n===================================\n12 Yes 00:08:58 2710805.51\n12 No 00:01:42 3365996.34\n\n\n----------------- Explain Analyze Query 12 WITH INDEX\n----------------------------\nSort (cost=2710805.51..2710805.51 rows=1 width=27) (actual\ntime=537713.672..537713.672 rows=2 loops=1)\n Sort Key: lineitem.l_shipmode\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=2710805.47..2710805.50 rows=1 width=27)\n(actual time=537713.597..537713.598 rows=2 loops=1)\n -> Merge Join (cost=1994471.69..2708777.28 rows=270426\nwidth=27) (actual time=510717.977..536818.802 rows=311208 loops=1)\n Merge Cond: (orders.o_orderkey = lineitem.l_orderkey)\n -> Index Scan using orders_pkey on orders\n(cost=0.00..672772.57 rows=15000045 width=20) (actual\ntime=0.019..20898.325 rows=14999972 loops=1)\n -> Sort (cost=1994455.40..1995131.47\nrows=270426 width=19) (actual time=510690.114..510915.678 rows=311208\nloops=1)\n Sort Key: lineitem.l_orderkey\n Sort Method: external sort Disk: 11568kB\n -> Bitmap Heap Scan on\nlineitem (cost=336295.10..1970056.39 rows=270426 width=19) (actual\ntime=419620.817..509685.421 rows=311208 loops=1)\n Recheck Cond:\n(l_shipmode = ANY (_{TRUCK,AIR}_::bpchar[]))\n Filter:\n((l_commitdate < l_receiptdate) AND (l_shipdate < l_commitdate) AND\n(l_receiptdate >= _1997-01-01_::date) AND (l_receiptdate < _1998-01-01\n00:00:00_::timestamp without time zone))\n -> Bitmap\nIndex Scan on idx_l_shipmodelineitem000 (cost=0.00..336227.49\nrows=15942635 width=0) (actual time=419437.172..419437.172\nrows=17133713 loops=1)\n Index\nCond: (l_shipmode = ANY (_{TRUCK,AIR}_::bpchar[]))\n\nTotal runtime: 537728.848 ms\n\n\n----------------- Explain Analyze Query 12 WITHOUT INDEX\n----------------------------\nSort (cost=3365996.33..3365996.34 rows=1 width=27) (actual\ntime=101850.883..101850.884 rows=2 loops=1)\n Sort Key: lineitem.l_shipmode Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=3365996.30..3365996.32 rows=1 width=27)\n(actual time=101850.798..101850.800 rows=2 loops=1)\n -> Merge Join (cost=2649608.28..3363936.68 rows=274616\nwidth=27) (actual time=75497.181..100938.830 rows=311208 loops=1)\n Merge Cond: (orders.o_orderkey = lineitem.l_orderkey)\n -> Index Scan using orders_pkey on orders\n(cost=0.00..672771.90 rows=15000000 width=20) (actual\ntime=0.020..20272.828 rows=14999972 loops=1)\n -> Sort (cost=2649545.68..2650232.22\nrows=274616 width=19) (actual time=75364.450..75618.772 rows=311208\nloops=1)\n Sort Key: lineitem.l_orderkey\n Sort Method: external sort\nDisk: 11568kB\n -> Seq Scan on lineitem\n(cost=0.00..2624738.17 rows=274616 width=19) (actual\ntime=0.839..74391.087 rows=311208 loops=1)\n Filter: ((l_shipmode\n= ANY (_{TRUCK,AIR}_::bpchar[])) AND (l_commitdate < l_receiptdate)\nAND (l_shipdate < l_commitdate) AND (l_receiptdate >=\n_1997-01-01_::date) AND (l_receiptdate < _1998-01-01\n00:00:00_::timestamp without time zone))\n Total runtime:\n101865.253 ms\n\n -=========------ SQL query 12 ----------------------\n select\n l_shipmode,\n sum(case\n when o_orderpriority = '1-URGENT'\n or 
o_orderpriority = '2-HIGH'\n then 1\n else 0\n end) as high_line_count,\n sum(case\n when o_orderpriority <> '1-URGENT'\n and o_orderpriority <> '2-HIGH'\n then 1\n else 0\n end) as low_line_count\nfrom\n orders,\n lineitem\nwhere\n o_orderkey = l_orderkey\n and l_shipmode in ('TRUCK', 'AIR')\n and l_commitdate < l_receiptdate\n and l_shipdate < l_commitdate\n and l_receiptdate >= date '1997-01-01'\n and l_receiptdate < date '1997-01-01' + interval '1' year\ngroup by\n l_shipmode\norder by\n l_shipmode\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 05:31:44 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Execution plan analysis"
},
{
"msg_contents": "2017-08-25 5:31 GMT-03:00 Neto pr <[email protected]>:\n> Dear all\n>\n> Someone help me analyze the execution plans below, is the query 12 of\n> TPC-H benchmark [1].\n> I need to find out why the query without index runs faster (7 times)\n> than with index, although the costs are smaller (see table).\n> I have other cases that happened in the same situation. The server\n> parameters have been set with PGTUNE. I use postgresql version 9.6.4\n> on Debian 8 OS with 4 GB memory.\n>\n> Query|Index(yes/no) |Time Spend |Cost Total\n> ===================================\n> 12 Yes 00:08:58 2710805.51\n> 12 No 00:01:42 3365996.34\n>\n>\n> ----------------- Explain Analyze Query 12 WITH INDEX\n> ----------------------------\n> Sort (cost=2710805.51..2710805.51 rows=1 width=27) (actual\n> time=537713.672..537713.672 rows=2 loops=1)\n> Sort Key: lineitem.l_shipmode\n> Sort Method: quicksort Memory: 25kB\n> -> HashAggregate (cost=2710805.47..2710805.50 rows=1 width=27)\n> (actual time=537713.597..537713.598 rows=2 loops=1)\n> -> Merge Join (cost=1994471.69..2708777.28 rows=270426\n> width=27) (actual time=510717.977..536818.802 rows=311208 loops=1)\n> Merge Cond: (orders.o_orderkey = lineitem.l_orderkey)\n> -> Index Scan using orders_pkey on orders\n> (cost=0.00..672772.57 rows=15000045 width=20) (actual\n> time=0.019..20898.325 rows=14999972 loops=1)\n> -> Sort (cost=1994455.40..1995131.47\n> rows=270426 width=19) (actual time=510690.114..510915.678 rows=311208\n> loops=1)\n> Sort Key: lineitem.l_orderkey\n> Sort Method: external sort Disk: 11568kB\n> -> Bitmap Heap Scan on\n> lineitem (cost=336295.10..1970056.39 rows=270426 width=19) (actual\n> time=419620.817..509685.421 rows=311208 loops=1)\n> Recheck Cond:\n> (l_shipmode = ANY (_{TRUCK,AIR}_::bpchar[]))\n> Filter:\n> ((l_commitdate < l_receiptdate) AND (l_shipdate < l_commitdate) AND\n> (l_receiptdate >= _1997-01-01_::date) AND (l_receiptdate < _1998-01-01\n> 00:00:00_::timestamp without time zone))\n> -> Bitmap\n> Index Scan on idx_l_shipmodelineitem000 (cost=0.00..336227.49\n> rows=15942635 width=0) (actual time=419437.172..419437.172\n> rows=17133713 loops=1)\n> Index\n> Cond: (l_shipmode = ANY (_{TRUCK,AIR}_::bpchar[]))\n>\n> Total runtime: 537728.848 ms\n>\n>\n> ----------------- Explain Analyze Query 12 WITHOUT INDEX\n> ----------------------------\n> Sort (cost=3365996.33..3365996.34 rows=1 width=27) (actual\n> time=101850.883..101850.884 rows=2 loops=1)\n> Sort Key: lineitem.l_shipmode Sort Method: quicksort Memory: 25kB\n> -> HashAggregate (cost=3365996.30..3365996.32 rows=1 width=27)\n> (actual time=101850.798..101850.800 rows=2 loops=1)\n> -> Merge Join (cost=2649608.28..3363936.68 rows=274616\n> width=27) (actual time=75497.181..100938.830 rows=311208 loops=1)\n> Merge Cond: (orders.o_orderkey = lineitem.l_orderkey)\n> -> Index Scan using orders_pkey on orders\n> (cost=0.00..672771.90 rows=15000000 width=20) (actual\n> time=0.020..20272.828 rows=14999972 loops=1)\n> -> Sort (cost=2649545.68..2650232.22\n> rows=274616 width=19) (actual time=75364.450..75618.772 rows=311208\n> loops=1)\n> Sort Key: lineitem.l_orderkey\n> Sort Method: external sort\n> Disk: 11568kB\n> -> Seq Scan on lineitem\n> (cost=0.00..2624738.17 rows=274616 width=19) (actual\n> time=0.839..74391.087 rows=311208 loops=1)\n> Filter: ((l_shipmode\n> = ANY (_{TRUCK,AIR}_::bpchar[])) AND (l_commitdate < l_receiptdate)\n> AND (l_shipdate < l_commitdate) AND (l_receiptdate >=\n> _1997-01-01_::date) AND (l_receiptdate < _1998-01-01\n> 00:00:00_::timestamp 
without time zone))\n> Total runtime:\n> 101865.253 ms\n>\n> -=========------ SQL query 12 ----------------------\n> select\n> l_shipmode,\n> sum(case\n> when o_orderpriority = '1-URGENT'\n> or o_orderpriority = '2-HIGH'\n> then 1\n> else 0\n> end) as high_line_count,\n> sum(case\n> when o_orderpriority <> '1-URGENT'\n> and o_orderpriority <> '2-HIGH'\n> then 1\n> else 0\n> end) as low_line_count\n> from\n> orders,\n> lineitem\n> where\n> o_orderkey = l_orderkey\n> and l_shipmode in ('TRUCK', 'AIR')\n> and l_commitdate < l_receiptdate\n> and l_shipdate < l_commitdate\n> and l_receiptdate >= date '1997-01-01'\n> and l_receiptdate < date '1997-01-01' + interval '1' year\n> group by\n> l_shipmode\n> order by\n> l_shipmode\n\nComplementing the question I'm using a server HP proliant Ml110-G9:\nProcessador: (1) Intel Xeon E5-1603v3 (2.8GHz/4-core/10MB/140W)\nMemória RAM: 4GB DDR4\nDisco Rígido: SATA 1TB 7.2K rpm LFF\nMore specifications\nhere:https://www.hpe.com/us/en/product-catalog/servers/proliant-servers/pip.specifications.hpe-proliant-ml110-gen9-server.7796454.html\n154/5000\n\nSee Below parameters presents in postgresql.conf. You would indicate\nwhich value for example: cpu_index_tuple_cost and other CPU_*, based\non this\nServer.\n\n#seq_page_cost = 1.0\n#random_page_cost = 4.0\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.005\n#cpu_operator_cost = 0.0025\nshared_buffers = 1GB\neffective_cache_size = 3GB\nwork_mem = 26214kB\nmaintenance_work_mem = 512MB\ncheckpoint_segments = 128\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 500\n\nBest Regards\nNeto Br\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 10:06:40 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Execution plan analysis"
},
{
"msg_contents": "Hi,\n\nSo looking at the plans, essentially the only part that is different is\nthe scan node at the very bottom - in one case it's a sequential scan,\nin the other case (the slow one) it's the bitmap index scan.\n\nEssentially it's this:\n\n -> Seq Scan on lineitem\n (cost=0.00..2624738.17 ...)\n (actual time=0.839..74391.087 ...)\n\nvs. this:\n\n -> Bitmap Heap Scan on lineitem\n (cost=336295.10..1970056.39 ...)\n (actual time=419620.817..509685.421 ...)\n -> Bitmap Index Scan on idx_l_shipmodelineitem000\n (cost=0.00..336227.49 ...)\n (actual time=419437.172..419437.172 ...)\n\nAll the nodes are the same and perform about the same in both cases, so\nyou can ignore them. This difference it the the root cause you need to\ninvestigate.\n\nThe question is why is the sequential scan so much faster than bitmap\nindex scan? Ideally, the bitmap heap scan should scan the index (in a\nmostly sequential way), build a bitmap, and then read just the matching\npart of the table (sequentially, by skipping some of the pages).\n\nNow, there are a few reasons why this might not work that well.\n\nPerhaps the table fits into RAM, but table + index does not. That would\nmake the sequential scan much faster than the index path. Not sure if\nthis is the case, as you haven't mentioned which TPC-H scale are you\ntesting, but you only have 4GB of RAM which if fairly low.\n\nAnother bit is prefetching - with sequential scans, the OS is able to\nprefetch the next bit of data automatically (read-ahead). With bitmap\nindex scans that's not the case, producing a lot of individual\nsynchronous I/O requests. See if increasing effective_cache_size (from\ndefault 1 to 16 or 32) helps.\n\nTry generating the plans with EXPLAIN (ANALYZE, BUFFERS), that should\ntell us more about how many blocks are found in shared buffers, etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 17:33:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Execution plan analysis"
}
] |
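Tomas's two suggestions in the thread above translate into a session-level experiment along these lines. Note that the quoted default of 1 and the suggested values of 16 or 32 match the effective_io_concurrency setting, which governs prefetching for bitmap heap scans, rather than effective_cache_size; the probe query below is a deliberately reduced stand-in for query 12, touching only the lineitem predicate that drives the bitmap scan:

    -- Per-node buffer hits and reads; run the same way against the full query 12.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM   lineitem
    WHERE  l_shipmode IN ('TRUCK', 'AIR')
    AND    l_receiptdate >= date '1997-01-01'
    AND    l_receiptdate <  date '1998-01-01';

    -- Allow the executor to issue more concurrent prefetch requests
    -- during bitmap heap scans (the default is 1).
    SET effective_io_concurrency = 32;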
[
{
"msg_contents": "Hi,\n\nI recently came across a performance difference between two machines that surprised me:\n\nPostgres Version / OS on both machines: v9.6.3 / MacOS 10.12.5\n\nMachine A: MacBook Pro Mid 2012, 2.7 GHz Intel Core i7 (Ivy Bridge), 8 MB L3 Cache, 16 GB 1600 MHz DDR3 [1]\nMachine B: MacBook Pro Late 2016, 2.6 GHz Intel Core i7 (Skylake), 6 MB L3 Cache,16 GB 2133 MHz LPDDR3 [2]\n\nQuery Performance on Machine A: [3]\n\nCTE Scan on zulu (cost=40673.620..40742.300 rows=3434 width=56) (actual time=6339.404..6339.462 rows=58 loops=1)\n CTE zulu\n -> HashAggregate (cost=40639.280..40673.620 rows=3434 width=31) (actual time=6339.400..6339.434 rows=58 loops=1)\n Group Key: mike.two, mike.golf\n -> Unique (cost=37656.690..40038.310 rows=34341 width=64) (actual time=5937.934..6143.161 rows=298104 loops=1)\n -> Sort (cost=37656.690..38450.560 rows=317549 width=64) (actual time=5937.933..6031.925 rows=316982 loops=1)\n Sort Key: mike.two, mike.lima, mike.echo DESC, mike.quebec\n Sort Method: quicksort Memory: 56834kB\n -> Seq Scan on mike (cost=0.000..8638.080 rows=317549 width=64) (actual time=0.019..142.831 rows=316982 loops=1)\n Filter: (golf five NOT NULL)\n Rows Removed by Filter: 26426\n\nQuery Performance on Machine B: [4]\n\nCTE Scan on zulu (cost=40621.420..40690.100 rows=3434 width=56) (actual time=853.436..853.472 rows=58 loops=1)\n CTE zulu\n -> HashAggregate (cost=40587.080..40621.420 rows=3434 width=31) (actual time=853.433..853.448 rows=58 loops=1)\n Group Key: mike.two, mike.golf\n -> Unique (cost=37608.180..39986.110 rows=34341 width=64) (actual time=634.412..761.678 rows=298104 loops=1)\n -> Sort (cost=37608.180..38400.830 rows=317057 width=64) (actual time=634.411..694.719 rows=316982 loops=1)\n Sort Key: mike.two, mike.lima, mike.echo DESC, mike.quebec\n Sort Method: quicksort Memory: 56834kB\n -> Seq Scan on mike (cost=0.000..8638.080 rows=317057 width=64) (actual time=0.047..85.534 rows=316982 loops=1)\n Filter: (golf five NOT NULL)\n Rows Removed by Filter: 26426\n\nAs you can see, Machine A spends 5889ms on the Sort Node vs 609ms on Machine B when looking at the \"Exclusive\" time with explain.depesz.com [3][4]. I.e. Machine B is ~10x faster at sorting than Machine B (for this particular query).\n\nMy question is: Why?\n\nI understand that this is a 3rd gen CPU vs a 6th gen, and that things have gotten faster despite stagnant clock speeds, but seeing a 10x difference still caught me off guard.\n\nDoes anybody have some pointers to understand where those gains are coming from? Is it the CPU, memory, or both? And in particular, why does Sort benefit so massively from the advancement here (~10x), but Seq Scan, Unique and HashAggregate don't benefit as much (~2x)?\n\nAs you can probably tell, my hardware knowledge is very superficial, so I apologize if this is a stupid question. But I'd genuinely like to improve my understanding and intuition about these things.\n\nCheers\nFelix Geisendörfer\n\n[1] http://www.everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i7-2.7-15-mid-2012-retina-display-specs.html\n[2] http://www.everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i7-2.6-15-late-2016-retina-display-touch-bar-specs.html\n[3] https://explain.depesz.com/s/hmn\n[4] https://explain.depesz.com/s/zVe\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 16:12:26 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "10x faster sort performance on Skylake CPU vs Ivy Bridge"
},
{
"msg_contents": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]> writes:\n> I recently came across a performance difference between two machines that surprised me:\n> ...\n> As you can see, Machine A spends 5889ms on the Sort Node vs 609ms on Machine B when looking at the \"Exclusive\" time with explain.depesz.com [3][4]. I.e. Machine B is ~10x faster at sorting than Machine B (for this particular query).\n\nI doubt this is a hardware issue, it's more likely that you're comparing\napples and oranges. The first theory that springs to mind is that the\nsort keys are strings and you're using C locale on the faster machine but\nsome non-C locale on the slower. strcoll() is pretty darn expensive\ncompared to strcmp() :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 11:07:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10x faster sort performance on Skylake CPU vs Ivy Bridge"
},
{
"msg_contents": "On Fri, Aug 25, 2017 at 8:07 AM, Tom Lane <[email protected]> wrote:\n> I doubt this is a hardware issue, it's more likely that you're comparing\n> apples and oranges. The first theory that springs to mind is that the\n> sort keys are strings and you're using C locale on the faster machine but\n> some non-C locale on the slower. strcoll() is pretty darn expensive\n> compared to strcmp() :-(\n\nstrcoll() is very noticeably slower on macOS, too.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 08:43:49 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10x faster sort performance on Skylake CPU vs Ivy Bridge"
},
{
"msg_contents": "\n> On Aug 25, 2017, at 17:07, Tom Lane <[email protected]> wrote:\n> \n> =?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]> writes:\n>> I recently came across a performance difference between two machines that surprised me:\n>> ...\n>> As you can see, Machine A spends 5889ms on the Sort Node vs 609ms on Machine B when looking at the \"Exclusive\" time with explain.depesz.com [3][4]. I.e. Machine B is ~10x faster at sorting than Machine B (for this particular query).\n> \n> I doubt this is a hardware issue, it's more likely that you're comparing\n> apples and oranges. The first theory that springs to mind is that the\n> sort keys are strings and you're using C locale on the faster machine but\n> some non-C locale on the slower. strcoll() is pretty darn expensive\n> compared to strcmp() :-(\n\nYou're right, that seems to be it.\n\nMachine A was using strcoll() (lc_collate=en_US.UTF-8)\nMachine B was using strcmp() (lc_collate=C)\n\nAfter switching Machine A to use lc_collate=C, I get:\n\nCTE Scan on zulu (cost=40673.620..40742.300 rows=3434 width=56) (actual time=1368.610..1368.698 rows=58 loops=1)\n CTE zulu\n -> HashAggregate (cost=40639.280..40673.620 rows=3434 width=56) (actual time=1368.607..1368.659 rows=58 loops=1)\n Group Key: mike.two, ((mike.golf)::text)\n -> Unique (cost=37656.690..40038.310 rows=34341 width=104) (actual time=958.493..1168.128 rows=298104 loops=1)\n -> Sort (cost=37656.690..38450.560 rows=317549 width=104) (actual time=958.491..1055.635 rows=316982 loops=1)\n Sort Key: mike.two, ((mike.lima)::text) COLLATE \"papa\", mike.echo DESC, mike.quebec\n Sort Method: quicksort Memory: 56834kB\n -> Seq Scan on mike (cost=0.000..8638.080 rows=317549 width=104) (actual time=0.043..172.496 rows=316982 loops=1)\n Filter: (golf five NOT NULL)\n Rows Removed by Filter: 26426\n\nSo Machine A needs 883ms [1] for the sort vs 609ms [2] for Machine B. That's ~1.4x faster which seems reasonable :).\n\nSorry for the delayed response, I didn't have access to machine B to confirm this right away.\n\n> \t\t\tregards, tom lane\n\nThis is my first post to a PostgreSQL mailing list, but I've been lurking\nfor a while. Thank you for taking the time for replying to e-mails such\nas mine and all the work you've put into PostgreSQL over the years.\nI'm deeply grateful.\n\n> On Aug 25, 2017, at 17:43, Peter Geoghegan <[email protected]> wrote:\n> \n> On Fri, Aug 25, 2017 at 8:07 AM, Tom Lane <[email protected]> wrote:\n>> I doubt this is a hardware issue, it's more likely that you're comparing\n>> apples and oranges. The first theory that springs to mind is that the\n>> sort keys are strings and you're using C locale on the faster machine but\n>> some non-C locale on the slower. strcoll() is pretty darn expensive\n>> compared to strcmp() :-(\n> \n> strcoll() is very noticeably slower on macOS, too.\n> \n\nThanks. This immediately explains what I saw when testing this query on a Linux machine that was also using lc_collate=en_US.UTF-8 but not being slowed down by it as much as the macOS machine.\n\n[1] https://explain.depesz.com/s/LOqa\n[2] https://explain.depesz.com/s/zVe\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Aug 2017 12:56:20 +0200",
"msg_from": "=?utf-8?Q?Felix_Geisend=C3=B6rfer?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10x faster sort performance on Skylake CPU vs Ivy\n Bridge"
}
] |
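For anyone reproducing the comparison above, the collation in effect is easy to check, and binary (strcmp-style) ordering can be requested per expression where linguistic ordering is not needed; the table and column names below are placeholders, since the plans in this thread are anonymized:

    -- What the cluster and each database were created with
    SHOW lc_collate;
    SELECT datname, datcollate FROM pg_database;

    -- Override the collation for a single sort key
    SELECT *
    FROM   some_table
    ORDER  BY some_text_column COLLATE "C";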
[
{
"msg_contents": "Hi,\r\n\r\nWe have an issue with one of our partitioned tables. It has a column with timestamp without time zone type, and we had to partition it daily. To do that, we created the following constraints like this example:\r\nCHECK (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n\r\n\r\nThe problem we’re facing is no matter how we’re trying to select from it, it scans through every partitions.\r\n\r\nParent table:\r\n Table \"public.dfp_in_network_impressions\"\r\n Column | Type | Modifiers \r\n-----------------+-----------------------------+-----------\r\n impression_time | timestamp without time zone | \r\n nexus_id | character varying | \r\n line_item_id | bigint | \r\n creative_id | bigint | \r\n ad_unit_id | bigint | \r\nTriggers:\r\n insert_dfp_in_network_impressions_trigger BEFORE INSERT ON dfp_in_network_impressions FOR EACH ROW EXECUTE PROCEDURE dfp_in_network_impressions_insert_function()\r\nNumber of child tables: 214 (Use \\d+ to list them.)\r\n\r\n\r\n\r\nOne example of the child tables:\r\nTable \"dfp_in_network_impressions.dfp_in_network_impressions_20170202\"\r\n Column | Type | Modifiers \r\n-----------------+-----------------------------+-----------\r\n impression_time | timestamp without time zone | \r\n nexus_id | character varying | \r\n line_item_id | bigint | \r\n creative_id | bigint | \r\n ad_unit_id | bigint | \r\nIndexes:\r\n \"idx_dfp_in_network_impressions_20170202_creative_id\" btree (creative_id)\r\n \"idx_dfp_in_network_impressions_20170202_line_item_id\" btree (line_item_id)\r\nCheck constraints:\r\n \"dfp_in_network_impressions_20170202_impression_time_check\" CHECK (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\nInherits: dfp_in_network_impressions\r\n\r\n\r\n\r\nConfirmed that the records are in the correct partitions.\r\n\r\nWe even tried to query with the exact same condition as it is defined in the check constraint:\r\nexplain select * from dfp_in_network_impressions where to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text;\r\n QUERY PLAN \r\n---------------------------------------------------------------------------------------------------\r\n Append (cost=0.00..18655467.21 rows=3831328 width=45)\r\n -> Seq Scan on dfp_in_network_impressions (cost=0.00..0.00 rows=1 width=64)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170101 (cost=0.00..7261.48 rows=1491 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170219 (cost=0.00..20824.01 rows=4277 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170102 (cost=0.00..28899.83 rows=5935 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170220 (cost=0.00..95576.80 rows=19629 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170103 (cost=0.00..88588.22 rows=18194 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170221 (cost=0.00..116203.54 rows=23865 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170410 (cost=0.00..158102.98 rows=32470 width=45)\r\n Filter: 
(to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170531 (cost=0.00..116373.83 rows=23900 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170104 (cost=0.00..91502.48 rows=18792 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170222 (cost=0.00..106469.76 rows=21866 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170411 (cost=0.00..152244.92 rows=31267 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170601 (cost=0.00..117742.66 rows=24181 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170105 (cost=0.00..87029.80 rows=17874 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170223 (cost=0.00..105371.79 rows=21641 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n -> Seq Scan on dfp_in_network_impressions_20170412 (cost=0.00..143897.43 rows=29553 width=45)\r\n Filter: (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n… Etc.\r\n\r\nIt scans through every partitions. Shouldn’t it only scan the dfp_in_network_impressions.dfp_in_network_impressions_20170202 child table? Or we missing something?\r\nAny advice/help would highly appreciated.\r\n\r\nSystem details:\r\nPostgres version: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11), 64-bit\r\nThe constraint_exclusion parameter is set to partition, but same behavior when I set it to “on”.\r\n\r\nSELECT name, current_setting(name), SOURCE\r\n FROM pg_settings\r\n WHERE SOURCE NOT IN ('default', 'override’);\r\n\r\n name | current_setting | source \r\n------------------------------+-----------------------------------------+----------------------\r\n application_name | psql | client\r\n archive_command | /var/db/wal_archive.sh %p %f | configuration file\r\n archive_mode | on | configuration file\r\n autovacuum_naptime | 1min | configuration file\r\n checkpoint_completion_target | 0.9 | configuration file\r\n checkpoint_segments | 32 | configuration file\r\n client_encoding | UTF8 | client\r\n DateStyle | ISO, MDY | configuration file\r\n default_text_search_config | pg_catalog.english | configuration file\r\n effective_cache_size | 96GB | configuration file\r\n huge_pages | try | configuration file\r\n lc_messages | en_US.UTF-8 | configuration file\r\n lc_monetary | en_US.UTF-8 | configuration file\r\n lc_numeric | en_US.UTF-8 | configuration file\r\n lc_time | en_US.UTF-8 | configuration file\r\n listen_addresses | * | configuration file\r\n log_autovacuum_min_duration | 0 | configuration file\r\n log_checkpoints | on | configuration file\r\n log_connections | on | configuration file\r\n log_destination | stderr | configuration file\r\n log_directory | /var/log/postgresql | configuration file\r\n log_duration | on | configuration file\r\n log_file_mode | 0640 | configuration file\r\n log_filename | postgresql-%Y%m%d.log | configuration file\r\n log_line_prefix | %t [%p]: [%l-1] %h %d %u | configuration file\r\n log_lock_waits | on | configuration file\r\n 
log_min_duration_statement | 100ms | configuration file\r\n log_min_error_statement | warning | configuration file\r\n log_min_messages | warning | configuration file\r\n log_rotation_age | 1d | configuration file\r\n log_rotation_size | 0 | configuration file\r\n log_statement | ddl | configuration file\r\n log_timezone | US/Central | configuration file\r\n log_truncate_on_rotation | on | configuration file\r\n logging_collector | on | configuration file\r\n maintenance_work_mem | 1GB | configuration file\r\n max_connections | 110 | configuration file\r\n max_locks_per_transaction | 256 | configuration file\r\n max_stack_depth | 2MB | environment variable\r\n max_wal_senders | 3 | configuration file\r\n port | 5432 | configuration file\r\n shared_buffers | 64GB | configuration file\r\n TimeZone | US/Central | configuration file\r\n track_activities | on | configuration file\r\n track_counts | on | configuration file\r\n track_functions | none | configuration file\r\n track_io_timing | off | configuration file\r\n wal_keep_segments | 2000 | configuration file\r\n wal_level | hot_standby | configuration file\r\n work_mem | 768MB | configuration file\r\n\r\n\r\n\r\n\r\nLinux 2.6.32-504.30.3.el6.x86_64 #1 SMP Wed Jul 15 10:13:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n\r\n\r\nThank you!\r\nAniko\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 15:36:29 +0000",
"msg_from": "Aniko Belim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioned table - scans through every partitions"
},
{
"msg_contents": "On Fri, Aug 25, 2017 at 03:36:29PM +0000, Aniko Belim wrote:\n> Hi,\n> \n> We have an issue with one of our partitioned tables. It has a column with timestamp without time zone type, and we had to partition it daily. To do that, we created the following constraints like this example:\n> CHECK (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\n> \n> \n> The problem we’re facing is no matter how we’re trying to select from it, it scans through every partitions.\n\n\n> It scans through every partitions. Shouldn’t it only scan the dfp_in_network_impressions.dfp_in_network_impressions_20170202 child table? Or we missing something?\n> Any advice/help would highly appreciated.\n\nhttps://www.postgresql.org/docs/9.6/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS\n|The following caveats apply to constraint exclusion:\n| Constraint exclusion only works when the query's WHERE clause contains\n| constants (or externally supplied parameters). For example, a comparison\n| against a non-immutable function such as CURRENT_TIMESTAMP cannot be\n| optimized, since the planner cannot know which partition the function value\n| might fall into at run time.\n\n...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 10:44:34 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned table - scans through every partitions"
},
{
"msg_contents": "Thank you, Justin! \r\n\r\nAniko\r\n\r\n\r\n\r\n\r\nOn 8/25/17, 10:44 AM, \"Justin Pryzby\" <[email protected]> wrote:\r\n\r\n>On Fri, Aug 25, 2017 at 03:36:29PM +0000, Aniko Belim wrote:\r\n>> Hi,\r\n>> \r\n>> We have an issue with one of our partitioned tables. It has a column with timestamp without time zone type, and we had to partition it daily. To do that, we created the following constraints like this example:\r\n>> CHECK (to_char(impression_time, 'YYYYMMDD'::text) = '20170202'::text)\r\n>> \r\n>> \r\n>> The problem we’re facing is no matter how we’re trying to select from it, it scans through every partitions.\r\n>\r\n>\r\n>> It scans through every partitions. Shouldn’t it only scan the dfp_in_network_impressions.dfp_in_network_impressions_20170202 child table? Or we missing something?\r\n>> Any advice/help would highly appreciated.\r\n>\r\n>https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS\r\n>|The following caveats apply to constraint exclusion:\r\n>| Constraint exclusion only works when the query's WHERE clause contains\r\n>| constants (or externally supplied parameters). For example, a comparison\r\n>| against a non-immutable function such as CURRENT_TIMESTAMP cannot be\r\n>| optimized, since the planner cannot know which partition the function value\r\n>| might fall into at run time.\r\n>\r\n>...\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Aug 2017 16:08:19 +0000",
"msg_from": "Aniko Belim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioned table - scans through every partitions"
}
] |
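Beyond the caveat Justin quotes, the check constraint itself is a problem here: to_char() is not immutable, which keeps the planner from using these constraints to exclude partitions. A common workaround is a plain range check on the raw column, with the query filtering on that same column using constants; the constraint name below is invented for illustration:

    -- Range constraint the planner can reason about
    ALTER TABLE dfp_in_network_impressions.dfp_in_network_impressions_20170202
      ADD CONSTRAINT impressions_20170202_time_range_check
      CHECK (impression_time >= '2017-02-02 00:00:00'::timestamp
         AND impression_time <  '2017-02-03 00:00:00'::timestamp);

    -- With constraint_exclusion = partition, this should touch a single child table
    SELECT count(*)
    FROM   dfp_in_network_impressions
    WHERE  impression_time >= '2017-02-02 00:00:00'::timestamp
    AND    impression_time <  '2017-02-03 00:00:00'::timestamp;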
[
{
"msg_contents": "On Thu, 24 Aug 2017 16:15:19 +0300, Mariel Cherkassky \n<[email protected]> wrote:\n\n >I'm trying to understand what postgresql doing in an issue that I'm \nhaving.\n >Our app team wrote a function that runs with a cursor over the results \nof a\n >query and via the utl_file func they write some columns to a file. I dont\n >understand why, but postgresql write the data into the file in the fs in\n >parts. I mean that it runs the query and it takes time to get back results\n >and when I see that the results back postgresql write to file the data and\n >then suddenly stops for X minutes. After those x minutes it starts \nagain to\n >write the data and it continues that way until its done. The query returns\n >total *100* rows. I want to understand why it stops suddenly. There arent\n >any locks in the database during this operation.\n >\n >my function looks like that :\n >\n >func(a,b,c...)\n >cursor cr for\n >select ab,c,d,e.....\n >begin\n >raise notice - 'starting loop time - %',timeofday();\n > for cr_record in cr\n >��� Raise notice 'print to file - '%',timeofday();\n >��� utl_file.write(file,cr_record)\n > end loop\n >end\n >\n >I see the log of the running the next output :\n >\n >starting loop 16:00\n >print to file : 16:03\n >print to file : 16:03\n >print to file : 16:07\n >print to file : 16:07\n >print to file : 16:07\n >print to file : 16:010\n >\n >......\n >\n >Can somebody explain to me this kind of behavior ? Why is it taking some\n >much time to write and in different minutes after the query already been\n >executed and finished ? Mybe I'm getting from the cursor only part of the\n >rows ?\n\n\nFirst I'd ask where did you get� utl_file� from?� Postrgesql has no such \nfacility, so you must be using an extension.� And not one I'm familiar \nwith either -� EnterpriseDB has a utl_file implementation in their \nOracle compatibility stuff, but it uses \"get\" and \"put\" calls rather \nthan \"read\" and \"write\".\n\nSecond, raising notices can be slow - I assume you added them to see \nwhat was happening?� How does the execution time compare if you remove them?\n\nI saw someone else asked about the execution plan, but I'm not sure that \nwill help here because it would affect only the initial select ... the \ncursor would be working with the result set and should be able to skip \ndirectly to the target rows.� I might expect several seconds for the \nloop with I/O ... but certainly not minutes unless the server is \nseverely overloaded.\n\nOne thing you might look at is the isolation level of the query.� If you \nare using READ_COMMITTED or less, and the table is busy, other writing \nqueries may be stepping on yours and even potentially changing your \nresult set during the cursor loop.� I would try using REPEATABLE_READ \nand see what happens.\n\n\nGeorge\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 31 Aug 2017 09:24:35 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
},
{
"msg_contents": "I'm using an extension that is called orafce.\nYes I add the raise notice in order to see what happening but it doesnt\nwork faster. The execution plan isnt relevant because It happens for many\nqueries not for a specific one. I didnt understand what do you mean by\nREPEATABLE_READ .\n\n2017-08-31 16:24 GMT+03:00 George Neuner <[email protected]>:\n\n> On Thu, 24 Aug 2017 16:15:19 +0300, Mariel Cherkassky <\n> [email protected]> wrote:\n>\n> >I'm trying to understand what postgresql doing in an issue that I'm\n> having.\n> >Our app team wrote a function that runs with a cursor over the results of\n> a\n> >query and via the utl_file func they write some columns to a file. I dont\n> >understand why, but postgresql write the data into the file in the fs in\n> >parts. I mean that it runs the query and it takes time to get back results\n> >and when I see that the results back postgresql write to file the data and\n> >then suddenly stops for X minutes. After those x minutes it starts again\n> to\n> >write the data and it continues that way until its done. The query returns\n> >total *100* rows. I want to understand why it stops suddenly. There arent\n>\n> >any locks in the database during this operation.\n> >\n> >my function looks like that :\n> >\n> >func(a,b,c...)\n> >cursor cr for\n> >select ab,c,d,e.....\n> >begin\n> >raise notice - 'starting loop time - %',timeofday();\n> > for cr_record in cr\n> > Raise notice 'print to file - '%',timeofday();\n> > utl_file.write(file,cr_record)\n> > end loop\n> >end\n> >\n> >I see the log of the running the next output :\n> >\n> >starting loop 16:00\n> >print to file : 16:03\n> >print to file : 16:03\n> >print to file : 16:07\n> >print to file : 16:07\n> >print to file : 16:07\n> >print to file : 16:010\n> >\n> >......\n> >\n> >Can somebody explain to me this kind of behavior ? Why is it taking some\n> >much time to write and in different minutes after the query already been\n> >executed and finished ? Mybe I'm getting from the cursor only part of the\n> >rows ?\n>\n>\n> First I'd ask where did you get utl_file from? Postrgesql has no such\n> facility, so you must be using an extension. And not one I'm familiar with\n> either - EnterpriseDB has a utl_file implementation in their Oracle\n> compatibility stuff, but it uses \"get\" and \"put\" calls rather than \"read\"\n> and \"write\".\n>\n> Second, raising notices can be slow - I assume you added them to see what\n> was happening? How does the execution time compare if you remove them?\n>\n> I saw someone else asked about the execution plan, but I'm not sure that\n> will help here because it would affect only the initial select ... the\n> cursor would be working with the result set and should be able to skip\n> directly to the target rows. I might expect several seconds for the loop\n> with I/O ... but certainly not minutes unless the server is severely\n> overloaded.\n>\n> One thing you might look at is the isolation level of the query. If you\n> are using READ_COMMITTED or less, and the table is busy, other writing\n> queries may be stepping on yours and even potentially changing your result\n> set during the cursor loop. I would try using REPEATABLE_READ and see what\n> happens.\n>\n>\n> George\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI'm using an extension that is called orafce. 
Yes I add the raise notice in order to see what happening but it doesnt work faster. The execution plan isnt relevant because It happens for many queries not for a specific one. I didnt understand what do you mean by REPEATABLE_READ .2017-08-31 16:24 GMT+03:00 George Neuner <[email protected]>:On Thu, 24 Aug 2017 16:15:19 +0300, Mariel Cherkassky <[email protected]> wrote:\n\n>I'm trying to understand what postgresql doing in an issue that I'm having.\n>Our app team wrote a function that runs with a cursor over the results of a\n>query and via the utl_file func they write some columns to a file. I dont\n>understand why, but postgresql write the data into the file in the fs in\n>parts. I mean that it runs the query and it takes time to get back results\n>and when I see that the results back postgresql write to file the data and\n>then suddenly stops for X minutes. After those x minutes it starts again to\n>write the data and it continues that way until its done. The query returns\n>total *100* rows. I want to understand why it stops suddenly. There arent\n>any locks in the database during this operation.\n>\n>my function looks like that :\n>\n>func(a,b,c...)\n>cursor cr for\n>select ab,c,d,e.....\n>begin\n>raise notice - 'starting loop time - %',timeofday();\n> for cr_record in cr\n> Raise notice 'print to file - '%',timeofday();\n> utl_file.write(file,cr_record)\n> end loop\n>end\n>\n>I see the log of the running the next output :\n>\n>starting loop 16:00\n>print to file : 16:03\n>print to file : 16:03\n>print to file : 16:07\n>print to file : 16:07\n>print to file : 16:07\n>print to file : 16:010\n>\n>......\n>\n>Can somebody explain to me this kind of behavior ? Why is it taking some\n>much time to write and in different minutes after the query already been\n>executed and finished ? Mybe I'm getting from the cursor only part of the\n>rows ?\n\n\nFirst I'd ask where did you get utl_file from? Postrgesql has no such facility, so you must be using an extension. And not one I'm familiar with either - EnterpriseDB has a utl_file implementation in their Oracle compatibility stuff, but it uses \"get\" and \"put\" calls rather than \"read\" and \"write\".\n\nSecond, raising notices can be slow - I assume you added them to see what was happening? How does the execution time compare if you remove them?\n\nI saw someone else asked about the execution plan, but I'm not sure that will help here because it would affect only the initial select ... the cursor would be working with the result set and should be able to skip directly to the target rows. I might expect several seconds for the loop with I/O ... but certainly not minutes unless the server is severely overloaded.\n\nOne thing you might look at is the isolation level of the query. If you are using READ_COMMITTED or less, and the table is busy, other writing queries may be stepping on yours and even potentially changing your result set during the cursor loop. I would try using REPEATABLE_READ and see what happens.\n\n\nGeorge\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 5 Sep 2017 14:28:44 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: printing results of query to file in different times"
},
{
"msg_contents": "Hi Mariel,\n\nPlease don't top post in the Postgresql groups.\n\nOn 9/5/2017 7:28 AM, Mariel Cherkassky wrote:\n> 2017-08-31 16:24 GMT+03:00 George Neuner <[email protected] > <mailto:[email protected]>>: > >> One thing you might look at is \nthe isolation level of the query. >> If you are using READ_COMMITTED or \nless, and the table is busy, >> other writing queries may be stepping on \nyours and even >> potentially changing your result set during the cursor \nloop. I >> would try using REPEATABLE_READ and see what happens. > > I \ndidn't understand what do you mean by REPEATABLE_READ .\nI was referring to transaction isolation levels.� When multiple \ntransactions are running concurrently, the DBMS can (or not) prevent \nthem from seeing changes made by one another.� Consider 2 transactions A \nand B running concurrently:\n\n � T1:�� A reads table X\n � T2:�� B writes to table X\n � T3:�� B commits\n � T4:�� A reads table X again.\n\nDepending on the isolation levels [and the specific query, obviously], A \nmay or may not be able to see what changes B made to X.\n\nThe default isolation level in Postgresql is READ COMMITTED, which does \nallow transactions to see committed writes made by concurrently running \ntransactions.� REPEATABLE READ is a higher level of isolation which \neffectively takes a snapshot of the table(s) when they are 1st read, and \nguarantees that any further reads (e.g., by cursors) of the tables made \nby the transaction continue to see the same results.\n\n\nMy thought was that your loop may be running slowly because the table is \nbeing changed underneath your cursor.� It may be better to pull the \nresults into a temporary table and run your cursor loop over that.\n\nFor more information, see:\nhttps://www.postgresql.org/docs/current/static/transaction-iso.html\nhttps://www.postgresql.org/docs/9.6/static/sql-begin.html\nhttps://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures\n\nGeorge\n\n\n\n\n\n\n Hi Mariel,\n\n Please don't top post in the Postgresql groups.\n\n On 9/5/2017 7:28 AM, Mariel Cherkassky wrote:\n> 2017-08-31 16:24 GMT+03:00 George Neuner <[email protected]\n> <mailto:[email protected]>>:\n> \n>> One thing you might look at is the isolation level of the query.\n>> If you are using READ_COMMITTED or less, and the table is busy,\n>> other writing queries may be stepping on yours and even\n>> potentially changing your result set during the cursor loop. 
I\n>> would try using REPEATABLE_READ and see what happens.\n> \n> I didn't understand what do you mean by REPEATABLE_READ .\n\n I was referring to transaction isolation levels.� When multiple\n transactions are running concurrently, the DBMS can (or not) prevent\n them from seeing changes made by one another.� Consider 2\n transactions A and B running concurrently:\n\n � T1:�� A reads table X\n � T2:�� B writes to table X\n � T3:�� B commits\n � T4:�� A reads table X again.\n\n Depending on the isolation levels [and the specific query,\n obviously], A may or may not be able to see what changes B made to\n X.\n\n The default isolation level in Postgresql is READ COMMITTED, which\n does allow transactions to see committed writes made by concurrently\n running transactions.� REPEATABLE READ is a higher level of\n isolation which effectively takes a snapshot of the table(s) when\n they are 1st read, and guarantees that any further reads (e.g., by\n cursors) of the tables made by the transaction continue to see the\n same results.\n\n\n My thought was that your loop may be running slowly because the\n table is being changed underneath your cursor.� It may be better to\n pull the results into a temporary table and run your cursor loop\n over that.\n\n For more information, see: \nhttps://www.postgresql.org/docs/current/static/transaction-iso.html\nhttps://www.postgresql.org/docs/9.6/static/sql-begin.html\nhttps://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures\n\n George",
"msg_date": "Tue, 5 Sep 2017 10:23:14 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
},
{
"msg_contents": "\nGeez ... I just saw how my last message got mangled.\nTrying again.\n\n\nOn 9/5/2017 7:28 AM, Mariel Cherkassky wrote:\n> I didn't understand what do you mean by REPEATABLE_READ.\n\nI was referring to transaction isolation levels.� When multiple \ntransactions are running concurrently, the DBMS can (or not) prevent \nthem from seeing changes made by one another.� Consider 2 transactions A \nand B running concurrently:\n\n � T1:�� A reads table X\n � T2:�� B writes to table X\n � T3:�� B commits\n � T4:�� A reads table X again.\n\nDepending on the isolation levels [and the specific query, obviously], A \nmay or may not be able to see what changes B made to X.\n\nThe default isolation level in Postgresql is READ COMMITTED, which does \nallow transactions to see committed writes made by concurrently running \ntransactions.� REPEATABLE READ is a higher level of isolation which \neffectively takes a snapshot of the table(s) when they are 1st read, and \nguarantees that any further reads (e.g., by cursors) of the tables made \nby the transaction continue to see the same results.\n\n\nMy thought was that your loop may be running slowly because the table is \nbeing changed underneath your cursor.� It may be better to pull the \nresults into a temporary table and run your cursor loop over that.\n\nFor more information, see:\nhttps://www.postgresql.org/docs/current/static/transaction-iso.html\nhttps://www.postgresql.org/docs/9.6/static/sql-begin.html\nhttps://stackoverflow.com/questions/6274457/set-isolation-level-for-postgresql-stored-procedures \n\n\nGeorge\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Sep 2017 13:12:24 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
},
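A minimal way to try the REPEATABLE READ suggestion above at the SQL level is to open the cursor inside an explicitly declared transaction; the query here is only a placeholder for whatever the function actually selects:

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    DECLARE cr CURSOR FOR
        SELECT area, phone FROM tc_finterms;   -- placeholder query
    FETCH 100 FROM cr;
    CLOSE cr;
    COMMIT;

Inside a plpgsql function the isolation level cannot be raised once the surrounding transaction has already executed a query, so in practice the caller has to start the transaction this way before invoking the function.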
{
"msg_contents": "Hi Mariel,\n\nOn 9/6/2017 4:38 AM, Mariel Cherkassky wrote:\n> I'm sure that those tables arent involved in any other transaction \n> when the loop is running. Anything else that I can check ? I think \n> that mybe its connected to some fetching properties but Im not \n> familiar with what settings..\n\nThat's the problem.� There are a lot of things that can affect query \nperformance, but most of them _won't_ affect an open cursor unless \nisolation is low and the query's source tables are changing due to \nongoing operations.� Each time the cursor is accessed, the query's \nsource tables are checked for modifications, and if they have been \nchanged, the cursor's query is re-executed ... potentially changing the \nresult set.\n\nNot that it matters here, but you didn't show your actual query. Even if \nyou are only fetching 100 rows, the query may be doing a lot of work \n(joins, sorts, etc.) to identify those rows.� If a complicated query is \nbeing executed over and over due to ongoing table modifications ...��� \nThat's why I suggested using a temporary table that you know won't be \nmodified while the cursor is open on it - it's a way of side-stepping \nisolation issues that are beyond your control.\n\nIf there really is no contention for the source tables, the only other \npossibilities are a badly over-loaded (or mis-configured) server, a \nproblem with the storage system (e.g., a bad disk that is causing \nhiccups rather than outright failures), or some unknown issue with the \nextension you are using.\n\nI'm afraid I'm out of suggestions.\nGeorge\n\n\n\n\n\n\n Hi Mariel,\n\nOn 9/6/2017 4:38 AM, Mariel Cherkassky\n wrote:\n\n\n\nI'm sure that those tables arent involved in any\n other transaction when the loop is running. Anything else that\n I can check ? I think that mybe its connected to some fetching\n properties but Im not familiar with what settings..\n\n\n\n That's the problem.� There are a lot of things that can affect query\n performance, but most of them won't affect an open cursor\n unless isolation is low and the query's source tables are changing\n due to ongoing operations.� Each time the cursor is accessed, the\n query's source tables are checked for modifications, and if they\n have been changed, the cursor's query is re-executed ... potentially\n changing the result set.\n\n Not that it matters here, but you didn't show your actual query.�\n Even if you are only fetching 100 rows, the query may be doing a lot\n of work (joins, sorts, etc.) to identify those rows.� If a\n complicated query is being executed over and over due to ongoing\n table modifications ...��� That's why I suggested using a temporary\n table that you know won't be modified while the cursor is open on it\n - it's a way of side-stepping isolation issues that are beyond your\n control.\n\n If there really is no contention for the source tables, the only\n other possibilities are a badly over-loaded (or mis-configured)\n server, a problem with the storage system (e.g., a bad disk that is\n causing hiccups rather than outright failures), or some unknown\n issue with the extension you are using.\n\n I'm afraid I'm out of suggestions.\n George",
"msg_date": "Wed, 6 Sep 2017 12:55:32 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
},
{
"msg_contents": "Hi Mariel,\n\nOn 9/7/2017 7:02 AM, Mariel Cherkassky wrote:\n> I'm pretty sure that the source tables are changing during the loop. I \n> have no problem showing the query :\n>\n> � � � �SELECT AREA,\n> � � � � � � � �PHONE,\n> � � � � � � � �TERM_CODE,\n> � � � � � � � �LINE_NO,\n> � � � � � � � �PAYMENT_START_DATE,\n> � � � � � � � �PAYMENT_END_DATE\n> � � � � FROM � TC_FINTERMS A\n> � � � � WHERE �AREA IN ('32', '75', '31', '35') -- Iditb 04/12/2011\n> � � � � AND � �TERM_TYPE = '1'\n> � � � � AND � �TERM_CODE NOT IN (15,255,180,182)\n> � � � � AND (PAYMENT_END_DATE IS NULL OR (PAYMENT_END_DATE > \n> current_timestamp AND\n> � � � � � � � date_trunc('day', PAYMENT_END_DATE) <> date_trunc('day', \n> PAYMENT_START_DATE)))\n> � � � � AND � �not exists(SELECT 3\n> � � � � � � � � FROM � BL_NLFR\n> � � � � � � � � WHERE �AREA IN ('75', '35') -- Iditb 04/12/2011\n> � � � � � � � � and � �phone = a.phone)\n> � � � � AND � �NOT EXISTS(SELECT 2\n> � � � � � � � � FROM � TV_FINTERM\n> � � � � � � � � WHERE �TERM_CODE = A.TERM_CODE\n> � � � � � � � � AND (TERM_TYPE_CODE = 556 or term_type_ss = 1::varchar))\n> � � � � AND � �NOT EXISTS(SELECT 1\n> � � � � � � � � FROM � NAP_IP_DEBIT � �B,\n> � � � � � � � � � � � �PS_RF_INST_PROD C\n> � � � � � � � � WHERE �B.INST_PROD_ID = C.INST_PROD_ID\n> � � � � � � � � AND � �C.SETID = 'SHARE'\n> � � � � � � � � AND � �C.NAP_MAKAT_CD <> 7\n> � � � � � � � � AND � �NAP_AREA2 = A.AREA\n> � � � � � � � � AND � �NAP_PHONE_NUM = A.PHONE\n> � � � � � � � � AND (B.NAP_FINTERM_END_DT IS NULL OR\n> � � � � � � � � � � � B.NAP_FINTERM_END_DT > current_timestamp)\n> � � � � � � � � AND � �NAP_BILLING_CATNUM = A.TERM_CODE\n> � � � � � � � � AND � �A.LINE_NO = B.NAP_FINTERMS_LINE);\n>\n>\n> expalin query :\n\nI don't understand the point of the not-exists sub-selects ... they \nreturn a single value (1, 2, or 3) or nothing depending - but then that \nvalue isn't compared to anything in TC_FINTERMS.�� If you were intending \nto \"select top <n>\" then you should know Postgresql doesn't support the \n\"top\" keyword ... 
you need to use \"limit\" [and sort first in the order \nmatters].\n\nAs far as I can see, those sub-selects are just wasting time without \naffecting the results at all.\n\n\n> -------------------------------\n> �Nested Loop Anti Join �(cost=67766.53..1008863.68 rows=1 width=38)\n> � �-> �Nested Loop Anti Join �(cost=67766.25..1008853.87 rows=1 width=38)\n> � � � � �-> �Hash Anti Join �(cost=67765.12..1008805.25 rows=1 width=38)\n> � � � � � � � �Hash Cond: (a.term_code = tv_finterm.term_code)\n> � � � � � � � �-> �Bitmap Heap Scan on tc_finterms a \n> �(cost=48268.39..843129.37 rows=1647089 width=38)\n> � � � � � � � � � � �Recheck Cond: ((((term_type)::text = '1'::text) \n> AND (payment_end_date IS NULL)) OR (((term_type)::text = '1'::\n> text) AND (payment_end_date > now())))\n> � � � � � � � � � � �Filter: (((area)::text = ANY \n> ('{32,75,31,35}'::text[])) AND (term_code <> ALL \n> ('{15,255,180,182}'::integer[]))\n> �AND ((payment_end_date IS NULL) OR ((payment_end_date > now()) AND \n> (date_trunc('day'::text, payment_end_date) <> date_trunc('day':\n> :text, payment_start_date)))))\n> � � � � � � � � � � �-> �BitmapOr �(cost=48268.39..48268.39 \n> rows=1867571 width=0)\n> � � � � � � � � � � � � � �-> �Bitmap Index Scan on mariel_tc_finterms \n> �(cost=0.00..32332.45 rows=1272789 width=0)\n> � � � � � � � � � � � � � � � � �Index Cond: (((term_type)::text = \n> '1'::text) AND (payment_end_date IS NULL))\n> � � � � � � � � � � � � � �-> �Bitmap Index Scan on mariel_tc_finterms \n> �(cost=0.00..15112.39 rows=594782 width=0)\n> � � � � � � � � � � � � � � � � �Index Cond: (((term_type)::text = \n> '1'::text) AND (payment_end_date > now()))\n> � � � � � � � �-> �Hash �(cost=18808.47..18808.47 rows=55061 width=4)\n> � � � � � � � � � � �-> �Seq Scan on tv_finterm �(cost=0.00..18808.47 \n> rows=55061 width=4)\n> � � � � � � � � � � � � � �Filter: ((term_type_code = 556) OR \n> ((term_type_ss)::text = '1'::text))\n> � � � � �-> �Nested Loop �(cost=1.13..24.87 rows=1 width=23)\n> � � � � � � � �-> �Index Scan using ps_rf_inst_prod_comb1 on \n> ps_rf_inst_prod c �(cost=0.56..12.23 rows=2 width=17)\n> � � � � � � � � � � �Index Cond: (((nap_phone_num)::text = \n> (a.phone)::text) AND ((nap_area2)::text = (a.area)::text))\n> � � � � � � � � � � �Filter: ((nap_makat_cd <> '7'::numeric) AND \n> ((setid)::text = 'SHARE'::text))\n> � � � � � � � �-> �Index Only Scan using mariel_nap on nap_ip_debit b \n> �(cost=0.56..6.31 rows=1 width=22)\n> � � � � � � � � � � �Index Cond: ((inst_prod_id = \n> (c.inst_prod_id)::text) AND (nap_billing_catnum = (a.term_code)::numeric))\n> � � � � � � � � � � �Filter: (((nap_finterm_end_dt IS NULL) OR \n> (nap_finterm_end_dt > now())) AND (a.line_no = (nap_finterms_line)::\n> double precision))\n> � �-> �Index Only Scan using bl_nlfr_ix1 on bl_nlfr �(cost=0.28..18.40 \n> rows=5 width=7)\n> � � � � �Index Cond: ((area = ANY ('{75,35}'::text[])) AND (phone = \n> (a.phone)::text))\n> -------------------------------\n\n\nJudging from the explaination [and modulo those odd sub-selects], this \nshould be reasonably quick as a normal batch query ... but according to \nthe estimates, some of those index scans are on 500K - 1M rows, which is \nnot something you want to be repeating many times under an open cursor.\n\n\n> How can I use temporary tables ?\n\nFirst run your complex query and place the results in a temp table. 
Then \nrun your cursor over the temp table.� And finally, drop the temp table \nbecause, if you want to run the function again, \"select into\" will fail \nif the target table already exists.\n\n ��� select� <result columns>\n � � � �� into temp table <tablename>\n �������� <the complex query>\n\n ��� open <cursor> for\n ��� ��� select * from <tablename>\n ��� :\n ��� close <cursor>\n\n ��� drop <tablename>\n\n\nsee https://www.postgresql.org/docs/current/static/sql-selectinto.html\n\nThe temp table won't change under the open cursor, and so there won't be \nany isolation issues.� If the performance is *still* bad after doing \nthis, then it's a server or extension issue.\n\nGeorge\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 7 Sep 2017 13:26:38 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: printing results of query to file in different times"
}
] |
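As a concrete illustration of the temporary-table-plus-cursor pattern George describes in the thread above: the sketch below uses a hypothetical source table, columns, and filter (big_source_table, id, payload, created_at) rather than the poster's real query, and uses CREATE TEMP TABLE ... AS, the modern equivalent of the older SELECT INTO syntax he links to.

    BEGIN;

    -- Materialize the expensive query once; ON COMMIT DROP cleans the table up automatically.
    CREATE TEMP TABLE tmp_result ON COMMIT DROP AS
        SELECT id, payload
        FROM   big_source_table
        WHERE  created_at >= now() - interval '1 day';

    -- The cursor now reads from a table that no other session can modify.
    DECLARE result_cur CURSOR FOR SELECT id, payload FROM tmp_result;

    FETCH 100 FROM result_cur;   -- repeat as needed, writing each batch to a file
    CLOSE result_cur;

    COMMIT;

Because the cursor is declared over the temporary table, ongoing changes to the original source tables cannot affect the rows it returns.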
[
{
"msg_contents": "Hi team,\n\nI'm trying to configure postgres and pgbouncer to handle many inserts from\nmany connections.\n\nHere's some details about what i want to achieve :\n\n We have more than 3000 client connections, and my server program forks\nbackend process for each client connections.\n If backend processes send a request to its connected client, the client\nsend some text data(about 3000 bytes) to the backend process and wait for\n next request.\n The backend process execute insert text data using PQexec from libpq\nlbirary, if PQexec is done, backend process send request to\n client again.\n\n All the inserts using one, same table.\n\nThe problem is, clients wait too long due to insert process is too slow.\nIt seems to working fine at first, but getting slows down after couple of\nhours,\neach insert query takes 3000+ ms and keep growing.\n\nNeed some help to figure out an actual causes of this problem.\n\nSystem information :\n PGBouncer 1.7.2.\n PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).\n Kernel version 2.6.32-696.10.1.el6.x86_64\n Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.\n 32GB ECC/REG-Buffered RAM.\n 128GB Samsung 840 evo SSD.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 4 Sep 2017 17:14:39 +0900",
"msg_from": "=?UTF-8?B?7Jqw7ISx66+8?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Handling small inserts from many connections."
},
{
"msg_contents": "\n\nWithout more information, this is my initial guess at your insert slowness problem:The symptom of this insert slowness/delayed action is delayed, granted, extend locks (locktype=extend) due to many concurrent connections trying to insert into the same table. Each insert request results in an extend lock (8k extension), which blocks other writers. What normally happens is that these extend locks happen so fast that you hardly seem them in the pg_locks table, except in the case where many concurrent connections are trying to do inserts into the same table. The following query will show if this is the case:select * from pg_locks where granted = false and locktype = 'extend';If this is your problem, then some kind of re-architecture is necessary to reduce the number of connections trying to do the inserts at the same time into the same table. My first hand problem like this goes back to 9.2, so perhaps some good stuff has happened in the newer versions of PG. Let's see what other good ideas come down the pike for this thread...Regards,Michael VitaleOn September 4, 2017 at 4:14 AM 우성민 <[email protected]> wrote:Hi team,I'm trying to configure postgres and pgbouncer to handle many inserts from many connections.Here's some details about what i want to achieve : We have more than 3000 client connections, and my server program forks backend process for each client connections. If backend processes send a request to its connected client, the client send some text data(about 3000 bytes) to the backend process and wait for next request. The backend process execute insert text data using PQexec from libpq lbirary, if PQexec is done, backend process send request to client again. All the inserts using one, same table.The problem is, clients wait too long due to insert process is too slow.It seems to working fine at first, but getting slows down after couple of hours,each insert query takes 3000+ ms and keep growing.Need some help to figure out an actual causes of this problem.System information : PGBouncer 1.7.2. PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final). Kernel version 2.6.32-696.10.1.el6.x86_64 Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor. 32GB ECC/REG-Buffered RAM. 128GB Samsung 840 evo SSD.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Sep 2017 07:57:09 -0400 (EDT)",
"msg_from": "Michael Vitale <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Handling small inserts from many connections."
},
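A slightly expanded variant of the pg_locks check Michael suggests above, joining to pg_stat_activity so you can also see which sessions are stuck waiting on relation-extension locks and for how long; only standard catalog/view columns are used.

    SELECT a.pid,
           now() - a.query_start AS waiting_for,
           l.relation::regclass  AS relation,
           a.query
    FROM   pg_locks l
    JOIN   pg_stat_activity a ON a.pid = l.pid
    WHERE  l.locktype = 'extend'
      AND  NOT l.granted;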
{
"msg_contents": "On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <[email protected]> wrote:\n> Hi team,\n>\n> I'm trying to configure postgres and pgbouncer to handle many inserts from\n> many connections.\n>\n> Here's some details about what i want to achieve :\n>\n> We have more than 3000 client connections, and my server program forks\n> backend process for each client connections.\n\nThis is a terrible configuration for any kind of performance. Under\nload all 3,000 connections can quickly swamp your server resulting in\nit slowing to a crawl.\n\nGet a connection pooler involved. I suggest pgbouncer unless you have\nvery odd pooling needs. It's easy, small, and fast. Funnel those 3,000\nconnections down to <100 if you can. It will make a huge difference in\nperformance and reliability.\n\n> System information :\n> PGBouncer 1.7.2.\n> PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).\n> Kernel version 2.6.32-696.10.1.el6.x86_64\n> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.\n> 32GB ECC/REG-Buffered RAM.\n> 128GB Samsung 840 evo SSD.\n\nIf it's still slow after connection pooling is setup, then look at\nthrowing more SSDs at the problem. If you're using a HW RAID\ncontroller, turn off caching with SSDs unless you can prove it's\nfaster with it. It almost never is.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Sep 2017 12:15:42 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Handling small inserts from many connections."
},
{
"msg_contents": "Jumping on Scott's observation, assuming you really do have a lot of active connections (idle ones usually are not a problem) a general rule of thumb for not overloading your system is keep your active connections less than (2-3) * (number so cpus).\n\nSent from my iPad\n\n> On Sep 4, 2017, at 2:15 PM, Scott Marlowe <[email protected]> wrote:\n> \n>> On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <[email protected]> wrote:\n>> Hi team,\n>> \n>> I'm trying to configure postgres and pgbouncer to handle many inserts from\n>> many connections.\n>> \n>> Here's some details about what i want to achieve :\n>> \n>> We have more than 3000 client connections, and my server program forks\n>> backend process for each client connections.\n> \n> This is a terrible configuration for any kind of performance. Under\n> load all 3,000 connections can quickly swamp your server resulting in\n> it slowing to a crawl.\n> \n> Get a connection pooler involved. I suggest pgbouncer unless you have\n> very odd pooling needs. It's easy, small, and fast. Funnel those 3,000\n> connections down to <100 if you can. It will make a huge difference in\n> performance and reliability.\n> \n>> System information :\n>> PGBouncer 1.7.2.\n>> PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n>> 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).\n>> Kernel version 2.6.32-696.10.1.el6.x86_64\n>> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.\n>> 32GB ECC/REG-Buffered RAM.\n>> 128GB Samsung 840 evo SSD.\n> \n> If it's still slow after connection pooling is setup, then look at\n> throwing more SSDs at the problem. If you're using a HW RAID\n> controller, turn off caching with SSDs unless you can prove it's\n> faster with it. It almost never is.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Sep 2017 18:06:24 -0400",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Handling small inserts from many connections."
},
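To put the rule of thumb above into practice, one quick way to count genuinely active backends (as opposed to idle pooled connections) on 9.6 is shown below; compare the result with roughly 2-3x the 16 cores of the original poster's machine.

    -- Breakdown of connections by state:
    SELECT state, count(*)
    FROM   pg_stat_activity
    GROUP  BY state
    ORDER  BY count(*) DESC;

    -- Or just the ones currently executing something:
    SELECT count(*) AS active_backends
    FROM   pg_stat_activity
    WHERE  state = 'active';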
{
"msg_contents": "On Mon, Sep 4, 2017 at 1:14 AM, 우성민 <[email protected]> wrote:\n\n> Hi team,\n>\n> I'm trying to configure postgres and pgbouncer to handle many inserts from\n> many connections.\n>\n> Here's some details about what i want to achieve :\n>\n> We have more than 3000 client connections, and my server program forks\n> backend process for each client connections.\n> If backend processes send a request to its connected client, the client\n> send some text data(about 3000 bytes) to the backend process and wait for\n> next request.\n> The backend process execute insert text data using PQexec from libpq\n> lbirary, if PQexec is done, backend process send request to\n> client again.\n>\n> All the inserts using one, same table.\n>\n> The problem is, clients wait too long due to insert process is too slow.\n> It seems to working fine at first, but getting slows down after couple of\n> hours,\n> each insert query takes 3000+ ms and keep growing.\n>\n\nIf it takes a couple hours for it to slow down, then it sounds like you\nhave a leak somewhere in your code.\n\nRun \"top\" and see who is using the CPU time (or the io wait time, if that\nis what it is, and the memory)\n\nCheers,\n\nJeff\n\nOn Mon, Sep 4, 2017 at 1:14 AM, 우성민 <[email protected]> wrote:Hi team,I'm trying to configure postgres and pgbouncer to handle many inserts from many connections.Here's some details about what i want to achieve : We have more than 3000 client connections, and my server program forks backend process for each client connections. If backend processes send a request to its connected client, the client send some text data(about 3000 bytes) to the backend process and wait for next request. The backend process execute insert text data using PQexec from libpq lbirary, if PQexec is done, backend process send request to client again. All the inserts using one, same table.The problem is, clients wait too long due to insert process is too slow.It seems to working fine at first, but getting slows down after couple of hours,each insert query takes 3000+ ms and keep growing.If it takes a couple hours for it to slow down, then it sounds like you have a leak somewhere in your code.Run \"top\" and see who is using the CPU time (or the io wait time, if that is what it is, and the memory) Cheers,Jeff",
"msg_date": "Mon, 4 Sep 2017 15:27:06 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Handling small inserts from many connections."
},
{
"msg_contents": "I’am already using pgbouncer as a connection pooler and default_pool_size\n = 96.\n\ni checked “show pools”, the max_wait was as high as 70 or more while INSERT\nstatement duration is about 3000ms in postgres log.\nThese numbers increase over time.\n\nI’ll try RAID with more SSDs.\n\nThank you for your response.\n\n2017년 9월 5일 (화) 오전 3:15, Scott Marlowe <[email protected]>님이 작성:\n\n> On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <[email protected]> wrote:\n> > Hi team,\n> >\n> > I'm trying to configure postgres and pgbouncer to handle many inserts\n> from\n> > many connections.\n> >\n> > Here's some details about what i want to achieve :\n> >\n> > We have more than 3000 client connections, and my server program forks\n> > backend process for each client connections.\n>\n> This is a terrible configuration for any kind of performance. Under\n> load all 3,000 connections can quickly swamp your server resulting in\n> it slowing to a crawl.\n>\n> Get a connection pooler involved. I suggest pgbouncer unless you have\n> very odd pooling needs. It's easy, small, and fast. Funnel those 3,000\n> connections down to <100 if you can. It will make a huge difference in\n> performance and reliability.\n>\n> > System information :\n> > PGBouncer 1.7.2.\n> > PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> > 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).\n> > Kernel version 2.6.32-696.10.1.el6.x86_64\n> > Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.\n> > 32GB ECC/REG-Buffered RAM.\n> > 128GB Samsung 840 evo SSD.\n>\n> If it's still slow after connection pooling is setup, then look at\n> throwing more SSDs at the problem. If you're using a HW RAID\n> controller, turn off caching with SSDs unless you can prove it's\n> faster with it. It almost never is.\n>\n\nI’am already using pgbouncer as a connection pooler and default_pool_size = 96.i checked “show pools”, the max_wait was as high as 70 or more while INSERT statement duration is about 3000ms in postgres log.These numbers increase over time.I’ll try RAID with more SSDs.Thank you for your response.2017년 9월 5일 (화) 오전 3:15, Scott Marlowe <[email protected]>님이 작성:On Mon, Sep 4, 2017 at 2:14 AM, 우성민 <[email protected]> wrote:\n> Hi team,\n>\n> I'm trying to configure postgres and pgbouncer to handle many inserts from\n> many connections.\n>\n> Here's some details about what i want to achieve :\n>\n> We have more than 3000 client connections, and my server program forks\n> backend process for each client connections.\n\nThis is a terrible configuration for any kind of performance. Under\nload all 3,000 connections can quickly swamp your server resulting in\nit slowing to a crawl.\n\nGet a connection pooler involved. I suggest pgbouncer unless you have\nvery odd pooling needs. It's easy, small, and fast. Funnel those 3,000\nconnections down to <100 if you can. It will make a huge difference in\nperformance and reliability.\n\n> System information :\n> PGBouncer 1.7.2.\n> PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n> 20120313 (Red Hat 4.4.7-18), 64-bit on CentOS release 6.9 (Final).\n> Kernel version 2.6.32-696.10.1.el6.x86_64\n> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz processor.\n> 32GB ECC/REG-Buffered RAM.\n> 128GB Samsung 840 evo SSD.\n\nIf it's still slow after connection pooling is setup, then look at\nthrowing more SSDs at the problem. If you're using a HW RAID\ncontroller, turn off caching with SSDs unless you can prove it's\nfaster with it. It almost never is.",
"msg_date": "Mon, 04 Sep 2017 22:54:03 +0000",
"msg_from": "=?UTF-8?B?7Jqw7ISx66+8?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Handling small inserts from many connections."
}
] |
[
{
"msg_contents": "Consider these 2 index scan produced by a query\n\n-> Index Scan using response_log_by_activity on public.response_log rl2\n (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0\nloops=34098)\n Output: rl2.activity_id, rl2.feed_id\n Index Cond: (rl2.activity_id =\nrl.activity_id)\n Filter: rl2.success\n Buffers: shared hit=3357159\nread=153313\n -> Index Scan using activity_pkey on\npublic.activity a (cost=0.00..51.10 rows=1 width=12) (actual\ntime=0.126..0.127 rows=1 loops=34088)\n Output: a.status_id, a.activity_id,\na.visit_id\n Index Cond: (a.activity_id =\nrl.activity_id)\n Buffers: shared hit=137925 read=32728\n\n\nAnd it's size\n\nconscopy=# select\npg_size_pretty(pg_relation_size('response_log_by_activity'::regclass));\n pg_size_pretty\n----------------\n 7345 MB\n(1 row)\n\nconscopy=# select\npg_size_pretty(pg_relation_size('activity_pkey'::regclass));\n pg_size_pretty\n----------------\n 8110 MB\n(1 row)\n\nIndex scan on response_log_by_activity is far slower. The table has just\nbeen repacked, and index rebuilt, but still slow.\n\nIs there any other way to make it faster ?\n\nWhy Buffers: shared hit=3,357,159 read=153,313 on response_log_by_activity\nis much bigger than Buffers: shared hit=137925 read=32728 on activity_pkey\nwhile activity_pkey size is bigger ?\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nConsider these 2 index scan produced by a query-> Index Scan using response_log_by_activity on public.response_log rl2 (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0 loops=34098) Output: rl2.activity_id, rl2.feed_id Index Cond: (rl2.activity_id = rl.activity_id) Filter: rl2.success Buffers: shared hit=3357159 read=153313 -> Index Scan using activity_pkey on public.activity a (cost=0.00..51.10 rows=1 width=12) (actual time=0.126..0.127 rows=1 loops=34088) Output: a.status_id, a.activity_id, a.visit_id Index Cond: (a.activity_id = rl.activity_id) Buffers: shared hit=137925 read=32728And it's sizeconscopy=# select pg_size_pretty(pg_relation_size('response_log_by_activity'::regclass)); pg_size_pretty ---------------- 7345 MB(1 row)conscopy=# select pg_size_pretty(pg_relation_size('activity_pkey'::regclass)); pg_size_pretty ---------------- 8110 MB(1 row)Index scan on response_log_by_activity is far slower. The table has just been repacked, and index rebuilt, but still slow.Is there any other way to make it faster ?Why Buffers: shared hit=3,357,159 read=153,313 on response_log_by_activity is much bigger than Buffers: shared hit=137925 read=32728 on activity_pkey while activity_pkey size is bigger ?-- Regards,Soni Maula Harriz",
"msg_date": "Tue, 5 Sep 2017 20:24:24 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow index scan performance"
},
{
"msg_contents": "It's Postgres 9.1.24 on RHEL 6.5\n\nOn Tue, Sep 5, 2017 at 8:24 PM, Soni M <[email protected]> wrote:\n\n> Consider these 2 index scan produced by a query\n>\n> -> Index Scan using response_log_by_activity on public.response_log rl2\n> (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0\n> loops=34098)\n> Output: rl2.activity_id, rl2.feed_id\n> Index Cond: (rl2.activity_id =\n> rl.activity_id)\n> Filter: rl2.success\n> Buffers: shared hit=3357159\n> read=153313\n> -> Index Scan using activity_pkey on\n> public.activity a (cost=0.00..51.10 rows=1 width=12) (actual\n> time=0.126..0.127 rows=1 loops=34088)\n> Output: a.status_id, a.activity_id,\n> a.visit_id\n> Index Cond: (a.activity_id =\n> rl.activity_id)\n> Buffers: shared hit=137925 read=32728\n>\n>\n> And it's size\n>\n> conscopy=# select pg_size_pretty(pg_relation_size('response_log_by_\n> activity'::regclass));\n> pg_size_pretty\n> ----------------\n> 7345 MB\n> (1 row)\n>\n> conscopy=# select pg_size_pretty(pg_relation_size('activity_pkey'::\n> regclass));\n> pg_size_pretty\n> ----------------\n> 8110 MB\n> (1 row)\n>\n> Index scan on response_log_by_activity is far slower. The table has just\n> been repacked, and index rebuilt, but still slow.\n>\n> Is there any other way to make it faster ?\n>\n> Why Buffers: shared hit=3,357,159 read=153,313 on response_log_by_activity\n> is much bigger than Buffers: shared hit=137925 read=32728 on activity_pkey\n> while activity_pkey size is bigger ?\n>\n> --\n> Regards,\n>\n> Soni Maula Harriz\n>\n\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nIt's Postgres 9.1.24 on RHEL 6.5On Tue, Sep 5, 2017 at 8:24 PM, Soni M <[email protected]> wrote:Consider these 2 index scan produced by a query-> Index Scan using response_log_by_activity on public.response_log rl2 (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0 loops=34098) Output: rl2.activity_id, rl2.feed_id Index Cond: (rl2.activity_id = rl.activity_id) Filter: rl2.success Buffers: shared hit=3357159 read=153313 -> Index Scan using activity_pkey on public.activity a (cost=0.00..51.10 rows=1 width=12) (actual time=0.126..0.127 rows=1 loops=34088) Output: a.status_id, a.activity_id, a.visit_id Index Cond: (a.activity_id = rl.activity_id) Buffers: shared hit=137925 read=32728And it's sizeconscopy=# select pg_size_pretty(pg_relation_size('response_log_by_activity'::regclass)); pg_size_pretty ---------------- 7345 MB(1 row)conscopy=# select pg_size_pretty(pg_relation_size('activity_pkey'::regclass)); pg_size_pretty ---------------- 8110 MB(1 row)Index scan on response_log_by_activity is far slower. The table has just been repacked, and index rebuilt, but still slow.Is there any other way to make it faster ?Why Buffers: shared hit=3,357,159 read=153,313 on response_log_by_activity is much bigger than Buffers: shared hit=137925 read=32728 on activity_pkey while activity_pkey size is bigger ?-- Regards,Soni Maula Harriz\n\n-- Regards,Soni Maula Harriz",
"msg_date": "Tue, 5 Sep 2017 20:46:14 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow index scan performance"
},
{
"msg_contents": "Trying on another server, it gives different result.\n\n-> Index Scan using response_log_by_activity on public.response_log rl2\n (cost=0.00..50.29 rows=17 width=8) (actual time=0.955..0.967 rows=0\nloops=30895)\n Output: rl2.activity_id, rl2.feed_id\n Index Cond: (rl2.activity_id =\nrl.activity_id)\n Filter: rl2.success\n Buffers: shared hit=2311312\nread=132342\n -> Index Scan using activity_pkey on\npublic.activity a (cost=0.00..49.79 rows=1 width=12) (actual\ntime=13.747..13.762 rows=1 loops=30892)\n Output: a.status_id, a.activity_id,\na.visit_id\n Index Cond: (a.activity_id =\nrl.activity_id)\n Buffers: shared hit=124463 read=30175\n\nNow, index scan on activity_pkey which take much slower. Can someone please\nexplain these ?\n\nThanks\n\nOn Tue, Sep 5, 2017 at 8:46 PM, Soni M <[email protected]> wrote:\n\n> It's Postgres 9.1.24 on RHEL 6.5\n>\n> On Tue, Sep 5, 2017 at 8:24 PM, Soni M <[email protected]> wrote:\n>\n>> Consider these 2 index scan produced by a query\n>>\n>> -> Index Scan using response_log_by_activity on public.response_log rl2\n>> (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0\n>> loops=34098)\n>> Output: rl2.activity_id,\n>> rl2.feed_id\n>> Index Cond: (rl2.activity_id =\n>> rl.activity_id)\n>> Filter: rl2.success\n>> Buffers: shared hit=3357159\n>> read=153313\n>> -> Index Scan using activity_pkey on\n>> public.activity a (cost=0.00..51.10 rows=1 width=12) (actual\n>> time=0.126..0.127 rows=1 loops=34088)\n>> Output: a.status_id, a.activity_id,\n>> a.visit_id\n>> Index Cond: (a.activity_id =\n>> rl.activity_id)\n>> Buffers: shared hit=137925 read=32728\n>>\n>>\n>> And it's size\n>>\n>> conscopy=# select pg_size_pretty(pg_relation_siz\n>> e('response_log_by_activity'::regclass));\n>> pg_size_pretty\n>> ----------------\n>> 7345 MB\n>> (1 row)\n>>\n>> conscopy=# select pg_size_pretty(pg_relation_siz\n>> e('activity_pkey'::regclass));\n>> pg_size_pretty\n>> ----------------\n>> 8110 MB\n>> (1 row)\n>>\n>> Index scan on response_log_by_activity is far slower. The table has just\n>> been repacked, and index rebuilt, but still slow.\n>>\n>> Is there any other way to make it faster ?\n>>\n>> Why Buffers: shared hit=3,357,159 read=153,313 on\n>> response_log_by_activity is much bigger than Buffers: shared hit=137925\n>> read=32728 on activity_pkey while activity_pkey size is bigger ?\n>>\n>> --\n>> Regards,\n>>\n>> Soni Maula Harriz\n>>\n>\n>\n>\n> --\n> Regards,\n>\n> Soni Maula Harriz\n>\n\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nTrying on another server, it gives different result.-> Index Scan using response_log_by_activity on public.response_log rl2 (cost=0.00..50.29 rows=17 width=8) (actual time=0.955..0.967 rows=0 loops=30895) Output: rl2.activity_id, rl2.feed_id Index Cond: (rl2.activity_id = rl.activity_id) Filter: rl2.success Buffers: shared hit=2311312 read=132342 -> Index Scan using activity_pkey on public.activity a (cost=0.00..49.79 rows=1 width=12) (actual time=13.747..13.762 rows=1 loops=30892) Output: a.status_id, a.activity_id, a.visit_id Index Cond: (a.activity_id = rl.activity_id) Buffers: shared hit=124463 read=30175Now, index scan on activity_pkey which take much slower. 
Can someone please explain these ?ThanksOn Tue, Sep 5, 2017 at 8:46 PM, Soni M <[email protected]> wrote:It's Postgres 9.1.24 on RHEL 6.5On Tue, Sep 5, 2017 at 8:24 PM, Soni M <[email protected]> wrote:Consider these 2 index scan produced by a query-> Index Scan using response_log_by_activity on public.response_log rl2 (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0 loops=34098) Output: rl2.activity_id, rl2.feed_id Index Cond: (rl2.activity_id = rl.activity_id) Filter: rl2.success Buffers: shared hit=3357159 read=153313 -> Index Scan using activity_pkey on public.activity a (cost=0.00..51.10 rows=1 width=12) (actual time=0.126..0.127 rows=1 loops=34088) Output: a.status_id, a.activity_id, a.visit_id Index Cond: (a.activity_id = rl.activity_id) Buffers: shared hit=137925 read=32728And it's sizeconscopy=# select pg_size_pretty(pg_relation_size('response_log_by_activity'::regclass)); pg_size_pretty ---------------- 7345 MB(1 row)conscopy=# select pg_size_pretty(pg_relation_size('activity_pkey'::regclass)); pg_size_pretty ---------------- 8110 MB(1 row)Index scan on response_log_by_activity is far slower. The table has just been repacked, and index rebuilt, but still slow.Is there any other way to make it faster ?Why Buffers: shared hit=3,357,159 read=153,313 on response_log_by_activity is much bigger than Buffers: shared hit=137925 read=32728 on activity_pkey while activity_pkey size is bigger ?-- Regards,Soni Maula Harriz\n\n-- Regards,Soni Maula Harriz\n\n-- Regards,Soni Maula Harriz",
"msg_date": "Tue, 5 Sep 2017 20:58:53 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow index scan performance"
}
] |
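One way to follow up on the buffer-count question in this thread is to look at the cumulative per-index I/O statistics PostgreSQL keeps; the query below uses the standard pg_statio_user_indexes view with the two index names taken from the plans above.

    SELECT indexrelname,
           idx_blks_hit,
           idx_blks_read,
           round(100.0 * idx_blks_hit / nullif(idx_blks_hit + idx_blks_read, 0), 2) AS hit_pct
    FROM   pg_statio_user_indexes
    WHERE  indexrelname IN ('response_log_by_activity', 'activity_pkey');

A persistently low hit percentage on one index, despite similar sizes, suggests its lookups touch many more distinct pages per loop than the other's.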
[
{
"msg_contents": "Hello All, I would like to know about how OS cache works for postgres table\nand index file.\n\nLet's say I have 10 year data, and commonly used data only the last 1 year.\nThis data is quite big, so each table and index file is divided into\nseveral file in PGDATA/base\n\nLet's say 1 index named order_by_date has relfilenode = 1870772348, and\nit's file consist of 1870772348, 1870772348.1, and 1870772348.2\n\nAnd for oftenly queried 1 year data, do ALL files for the order_by_date\npushed to OS cache ? or it's just 1 file that contains index to this 1 year\ndata.\n\nHow about index named order_by_customer, will ALL the index files pushed to\nOS cache ?\n\nCan someone please explain about how OS cache works for this condition.\n\nThanks very much for the explanation.\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nHello All, I would like to know about how OS cache works for postgres table and index file.Let's say I have 10 year data, and commonly used data only the last 1 year. This data is quite big, so each table and index file is divided into several file in PGDATA/baseLet's say 1 index named order_by_date has relfilenode = 1870772348, and it's file consist of 1870772348, 1870772348.1, and 1870772348.2And for oftenly queried 1 year data, do ALL files for the order_by_date pushed to OS cache ? or it's just 1 file that contains index to this 1 year data.How about index named order_by_customer, will ALL the index files pushed to OS cache ?Can someone please explain about how OS cache works for this condition.Thanks very much for the explanation.-- Regards,Soni Maula Harriz",
"msg_date": "Wed, 6 Sep 2017 15:12:26 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "OS cache management"
},
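For the question above about which physical segment files belong to an index: the base on-disk path and the total size can be checked from SQL with the standard functions shown below (using order_by_date, the index name from the message; segments beyond 1 GB get the .1, .2, ... suffixes next to the base file).

    SELECT pg_relation_filepath('order_by_date')               AS base_segment_file,
           pg_size_pretty(pg_relation_size('order_by_date'))   AS total_size;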
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Soni M\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 6 de Septiembre 2017 5:12:26\n> Asunto: [PERFORM] OS cache management\n> \n> Hello All, I would like to know about how OS cache works for postgres table\n> and index file.\n> \n> Let's say I have 10 year data, and commonly used data only the last 1 year.\n> This data is quite big, so each table and index file is divided into\n> several file in PGDATA/base\n> \n> Let's say 1 index named order_by_date has relfilenode = 1870772348, and\n> it's file consist of 1870772348, 1870772348.1, and 1870772348.2\n> \n> And for oftenly queried 1 year data, do ALL files for the order_by_date\n> pushed to OS cache ? or it's just 1 file that contains index to this 1 year\n> data.\n> \n\nPostgres has its own cache (defined by the \"shared_buffers\" variable). Usually, the unit of movement in and out from the cache is a 8k page (defined at compilation time), so you cant put it directly in terms of files.\n\nThere is an extension that can inspect the cache contents:\nhttps://www.postgresql.org/docs/current/static/pgbuffercache.html\n\nHTH\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Sep 2017 14:13:34 +0000 (UTC)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OS cache management"
},
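To make Gerardo's pointer concrete, the query below is adapted from the worked example in the pg_buffercache documentation: it lists which relations currently occupy the most 8k pages in shared_buffers. Note that it shows only PostgreSQL's own buffer cache, not the OS page cache.

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           count(*)                        AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached
    FROM   pg_buffercache b
    JOIN   pg_class c
           ON b.relfilenode = pg_relation_filenode(c.oid)
          AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                    WHERE datname = current_database()))
    GROUP  BY c.relname
    ORDER  BY buffers DESC
    LIMIT  10;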
{
"msg_contents": "Il 06/09/2017 10:12, Soni M ha scritto:\n>\n>\n> Let's say I have 10 year data, and commonly used data only the last 1 \n> year. This data is quite big, so each table and index file is divided \n> into several file in PGDATA/base\n>\nMay not be relevant to what you asked, but if you want to keep last yeat \ndata in a \"small and fast\" dataset separated (physically separated!) by \nold data (that's still available, but response times may vary), IMHO you \nshould consider partitioning...\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\nHTH,\nMoreno.-\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Sep 2017 16:45:09 +0200",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SPAM] OS cache management"
},
{
"msg_contents": "In our environment, OS cache is much bigger than postgres buffers. Postgres\nbuffers around 8 GB, OS cache more than 100 GB. Maybe we should inspect\npgfincore\n\nOn Wed, Sep 6, 2017 at 9:13 PM, Gerardo Herzig <[email protected]> wrote:\n\n>\n>\n> ----- Mensaje original -----\n> > De: \"Soni M\" <[email protected]>\n> > Para: [email protected]\n> > Enviados: Miércoles, 6 de Septiembre 2017 5:12:26\n> > Asunto: [PERFORM] OS cache management\n> >\n> > Hello All, I would like to know about how OS cache works for postgres\n> table\n> > and index file.\n> >\n> > Let's say I have 10 year data, and commonly used data only the last 1\n> year.\n> > This data is quite big, so each table and index file is divided into\n> > several file in PGDATA/base\n> >\n> > Let's say 1 index named order_by_date has relfilenode = 1870772348, and\n> > it's file consist of 1870772348, 1870772348.1, and 1870772348.2\n> >\n> > And for oftenly queried 1 year data, do ALL files for the order_by_date\n> > pushed to OS cache ? or it's just 1 file that contains index to this 1\n> year\n> > data.\n> >\n>\n> Postgres has its own cache (defined by the \"shared_buffers\" variable).\n> Usually, the unit of movement in and out from the cache is a 8k page\n> (defined at compilation time), so you cant put it directly in terms of\n> files.\n>\n> There is an extension that can inspect the cache contents:\n> https://www.postgresql.org/docs/current/static/pgbuffercache.html\n>\n> HTH\n> Gerardo\n>\n\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nIn our environment, OS cache is much bigger than postgres buffers. Postgres buffers around 8 GB, OS cache more than 100 GB. Maybe we should inspect pgfincoreOn Wed, Sep 6, 2017 at 9:13 PM, Gerardo Herzig <[email protected]> wrote:\n\n----- Mensaje original -----\n> De: \"Soni M\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 6 de Septiembre 2017 5:12:26\n> Asunto: [PERFORM] OS cache management\n>\n> Hello All, I would like to know about how OS cache works for postgres table\n> and index file.\n>\n> Let's say I have 10 year data, and commonly used data only the last 1 year.\n> This data is quite big, so each table and index file is divided into\n> several file in PGDATA/base\n>\n> Let's say 1 index named order_by_date has relfilenode = 1870772348, and\n> it's file consist of 1870772348, 1870772348.1, and 1870772348.2\n>\n> And for oftenly queried 1 year data, do ALL files for the order_by_date\n> pushed to OS cache ? or it's just 1 file that contains index to this 1 year\n> data.\n>\n\nPostgres has its own cache (defined by the \"shared_buffers\" variable). Usually, the unit of movement in and out from the cache is a 8k page (defined at compilation time), so you cant put it directly in terms of files.\n\nThere is an extension that can inspect the cache contents:\nhttps://www.postgresql.org/docs/current/static/pgbuffercache.html\n\nHTH\nGerardo\n-- Regards,Soni Maula Harriz",
"msg_date": "Sun, 10 Sep 2017 12:48:23 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OS cache management"
},
{
"msg_contents": "Yeah, thanks. We have it in count.\n\nOn Wed, Sep 6, 2017 at 9:45 PM, Moreno Andreo <[email protected]>\nwrote:\n\n> Il 06/09/2017 10:12, Soni M ha scritto:\n>\n>>\n>>\n>> Let's say I have 10 year data, and commonly used data only the last 1\n>> year. This data is quite big, so each table and index file is divided into\n>> several file in PGDATA/base\n>>\n>> May not be relevant to what you asked, but if you want to keep last yeat\n> data in a \"small and fast\" dataset separated (physically separated!) by old\n> data (that's still available, but response times may vary), IMHO you should\n> consider partitioning...\n> https://www.postgresql.org/docs/current/static/ddl-partitioning.html\n>\n> HTH,\n> Moreno.-\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n\nYeah, thanks. We have it in count.On Wed, Sep 6, 2017 at 9:45 PM, Moreno Andreo <[email protected]> wrote:Il 06/09/2017 10:12, Soni M ha scritto:\n\n\n\nLet's say I have 10 year data, and commonly used data only the last 1 year. This data is quite big, so each table and index file is divided into several file in PGDATA/base\n\n\nMay not be relevant to what you asked, but if you want to keep last yeat data in a \"small and fast\" dataset separated (physically separated!) by old data (that's still available, but response times may vary), IMHO you should consider partitioning...\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\nHTH,\nMoreno.-\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Regards,Soni Maula Harriz",
"msg_date": "Sun, 10 Sep 2017 12:49:41 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SPAM] OS cache management"
}
] |
[
{
"msg_contents": "I am using a GIST index on timestamp range, because it supports 'contains'\noperator ('@>'). Unfortunately, in large scale (billions of rows, index\nsize: almost 800 GB) vacuuming the index takes an order of magnitude longer\nthan btrees (days/weeks instead of hours).\nAccording to the code, during vacuum gist index is traversed in a logical\norder which translates into random disk acceses (function gistbulkdelete in\ngistvacuum.c). Btree indexes are vacuummed in physical order (function\nbtvacuumscan in nbtree.c).\n\nAs a workaround, I'm planning to replace all uses of 'contains' with the\nfollowing function:\n\n CREATE OR REPLACE FUNCTION tstzrange_contains(\n range tstzrange,\n ts timestamptz)\n RETURNS bool AS\n $$\n SELECT (ts >= lower(range) AND (lower_inc(range) OR ts > lower(range)))\n AND (ts <= upper(range) AND (upper_inc(range) OR ts < upper(range)))\n $$ LANGUAGE SQL IMMUTABLE;\n\nand create btree indexes on lower and upper bound:\n\n CREATE INDEX my_table_time_range_lower_idx ON my_table\n(lower(time_range));\n CREATE INDEX my_table_time_range_upper_idx ON my_table\n(upper(time_range));\n\nIs it the best approach?\n\n-- \nBest regards,\nMarcin Barczynski\n\nI am using a GIST index on timestamp range, because it supports 'contains' operator ('@>'). Unfortunately, in large scale (billions of rows, index size: almost 800 GB) vacuuming the index takes an order of magnitude longer than btrees (days/weeks instead of hours). According to the code, during vacuum gist index is traversed in a logical order which translates into random disk acceses (function gistbulkdelete in gistvacuum.c). Btree indexes are vacuummed in physical order (function btvacuumscan in nbtree.c).As a workaround, I'm planning to replace all uses of 'contains' with the following function: CREATE OR REPLACE FUNCTION tstzrange_contains( range tstzrange, ts timestamptz) RETURNS bool AS $$ SELECT (ts >= lower(range) AND (lower_inc(range) OR ts > lower(range))) AND (ts <= upper(range) AND (upper_inc(range) OR ts < upper(range))) $$ LANGUAGE SQL IMMUTABLE;and create btree indexes on lower and upper bound: CREATE INDEX my_table_time_range_lower_idx ON my_table (lower(time_range)); CREATE INDEX my_table_time_range_upper_idx ON my_table (upper(time_range));Is it the best approach?-- Best regards,Marcin Barczynski",
"msg_date": "Wed, 6 Sep 2017 13:57:25 +0200",
"msg_from": "Marcin Barczynski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow vacuum of GIST indexes,\n because of random reads on PostgreSQL 9.6"
}
] |
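For readers comparing the two approaches in the message above: with the default half-open '[)' bounds of tstzrange, a containment search rewritten for the proposed lower()/upper() btree expression indexes would look like the sketch below (my_table and time_range are the poster's illustrative names; ranges with infinite or NULL bounds need extra handling).

    -- Original form, served by the GiST index:
    SELECT count(*) FROM my_table
    WHERE  time_range @> now();

    -- Rewritten form that the two btree expression indexes can serve,
    -- assuming inclusive-lower / exclusive-upper bounds:
    SELECT count(*) FROM my_table
    WHERE  lower(time_range) <= now()
      AND  upper(time_range) >  now();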
[
{
"msg_contents": "Hello\n\nTrying to setup table partitioning I noticed strange behavior that prevents\nme from using it.\n\nI set up master and child tables via inheritance, with range CHECK by date\nand with\ntrigger on 'insert', as described in the documentation.\n\nI was happy with insertion speed, it was about 30 megabytes per second that\nwas more than I expected,\nand server idle time was near 95 %. I used 100 parallel clients.\n\nHowever, when it came to updates things turned very bad.\nI set up a test with 30 running client making 10000 updates each in a\nrandom fashion.\nupdates via master table took 6 times longer and server idle time dropped\nto 15%, user CPU 75% with load average 15.\n\nTest details below\n\n300000 updates ( 30 processes 10000 selects each)\n\nvia master table 134 seconds\nvia child table 20 seconds\n\n300000 updates via master table without \"date1 >= '2017-09-06' and date1 <\n'2017-09-07'\" clause\n180 seconds\nThat means that constraint_exlusion works, however, the process of\nexclusion takes A LOT OF time.\n\nI tried to repeat the test with selects\n\n300000 selects ( 30 processes 10000 selects each)\n\nvia master table 50 seconds\nvia child table 8 seconds\n\nThis is very bad too.\n\nThe documentation says that it is not good to have 1000 partition, probably\n100 is OK, but I have only 40 partitions\nand have noticeable delays with only 5 partitions.\n\nWhat I also cannot understand, why time increase for 'select'\nis much higher (2.5 times) than time increase for 'update', considering\nthat 'where' clause is identical\nand assuming time is spent selecting relevant child tables.\n\nBest regards, Konstantin\n\nEnvironment description.\n\n\nPostgres 9.5 on linux\n\ndb=> select version();\n\nversion\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 9.5.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-11), 64-bit\n(1 row)\ndb=>\n\n\n16 CPU\n\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n\n128GB ram\n\n32GB shared_buffers\n\n\nTable statistics\n\ndb=> select count(*) from my_log_daily;\n count\n--------\n 408568\n(1 row)\n\ndb=> select count(*) from my_log_daily_170906;\n count\n--------\n 408568\n(1 row)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) select stage+1 from my_log_daily_170906 where\ndate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and instance='WS6';\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=4) (actual time=0.013..0.014 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time zone)\nAND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 0.135 ms\n Execution time: 0.029 ms\n(6 rows)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) select stage+1 from my_log_daily where date1\n>= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and 
instance='WS6';\n\nQUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..8.46 rows=2 width=4) (actual time=0.016..0.017 rows=1\nloops=1)\n Buffers: shared hit=4\n -> Append (cost=0.00..8.45 rows=2 width=4) (actual time=0.013..0.014\nrows=1 loops=1)\n Buffers: shared hit=4\n -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=4)\n(actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without\ntime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)\nAND (msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND\n((instance)::text = 'WS6'::text))\n -> Index Scan using my_log_daily_idx_170906 on\nmy_log_daily_170906 (cost=0.42..8.45 rows=1 width=4) (actual\ntime=0.012..0.013 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND\n(msgid3 = 1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without\ntime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 2.501 ms\n Execution time: 0.042 ms\n(12 rows)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) update my_log_daily_170906 set stage=stage+1\nwhere date1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253\nand msgid2=20756 and msgid3=1504712117 and instance='WS6';\n\nQUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on my_log_daily_170906 (cost=0.42..8.46 rows=1 width=186) (actual\ntime=0.133..0.133 rows=0 loops=1)\n Buffers: shared hit=5 dirtied=1\n -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=186) (actual time=0.014..0.015 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 0.488 ms\n Execution time: 0.177 ms\n(8 rows)\n\ndb=>\nexplain (ANALYZE,BUFFERS) update my_log_daily set stage=stage+1 where\ndate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and instance='WS6';\n\nQUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on my_log_daily (cost=0.00..8.46 rows=2 width=587) (actual\ntime=0.052..0.052 rows=0 loops=1)\n Update on my_log_daily\n Update on my_log_daily_170906\n Buffers: shared hit=5\n -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=988) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone) AND\n(msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND\n((instance)::text = 'WS6'::text))\n -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=186) (actual time=0.019..0.020 rows=1 loops=1)\n 
Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 4.639 ms\n Execution time: 0.147 ms\n(12 rows)\n\n\ndb=> \\d my_log_daily\n Table \"public.my_log_daily\"\n Column | Type |\nModifiers\n------------+-----------------------------+----------------------------------------------------\n client_id | integer | not null\n pult | character varying(6) | not null\n opr | character varying(30) | not null\n handler | character varying(60) |\n msgid | integer |\n sclient_id | integer |\n stage | integer | default 0\n msgid1 | integer | default 0\n msgid2 | integer | default 0\n msgid3 | integer | default 0\n ended | smallint | default 0\n date1 | timestamp without time zone | default\n('now'::text)::timestamp without time zone\n date2 | timestamp without time zone |\n reserved1 | character varying(100) |\n reserved2 | character varying(100) |\n reserved3 | character varying(100) |\n atpco | smallint | not null default 0\n rsrvdnum1 | integer |\n rsrvdnum2 | integer |\n rsrvdnum3 | integer |\n instance | character varying(3) |\n duration | integer | default 0\n ip | integer |\nTriggers:\n insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROW\nEXECUTE PROCEDURE my_log_daily_insert_trigger()\nNumber of child tables: 40 (Use \\d+ to list them.)\n\ndb=>\n\nIndexes:\n \"my_log_daily_idx_170906\" UNIQUE, btree (msgid1, msgid2, msgid3,\ninstance)\n \"my_log_daily_date_170906\" btree (date1)\n \"my_log_daily_handler_170906\" btree (handler)\n \"my_log_daily_pult_170906\" btree (pult)\n \"my_log_daily_reserved1_170906\" btree (reserved1)\n \"my_log_daily_src_170906\" btree (client_id, date1)\nCheck constraints:\n \"my_log_daily_170906_date1_check\" CHECK (date1 >= '2017-09-06\n00:00:00'::timestamp without time zone AND date1 < '2017-09-07\n00:00:00'::timestamp without time zone)\nInherits: my_log_daily\n\ndb=>\n\n\n\na complete list of child tables below.\ntable descriptions including CHECK and indexes ( as well as trigger\nfunction ) are autogenerated, so there is no human error.\n\n\n-----------------\ndb=> \\d+ my_log_daily\n Table\n\"public.my_log_daily\"\n Column | Type |\nModifiers | Storage | Stats target | Description\n------------+-----------------------------+----------------------------------------------------+----------+--------------+-------------\n client_id | integer | not\nnull | plain | |\n pult | character varying(6) | not\nnull | extended | |\n opr | character varying(30) | not\nnull | extended | |\n handler | character varying(60)\n| | extended\n| |\n msgid | integer\n| | plain\n| |\n sclient_id | integer\n| | plain\n| |\n stage | integer | default\n0 | plain | |\n msgid1 | integer | default\n0 | plain | |\n msgid2 | integer | default\n0 | plain | |\n msgid3 | integer | default\n0 | plain | |\n ended | smallint | default\n0 | plain | |\n date1 | timestamp without time zone | default\n('now'::text)::timestamp without time zone | plain | |\n date2 | timestamp without time zone\n| | plain\n| |\n reserved1 | character varying(100)\n| | extended\n| |\n reserved2 | character varying(100)\n| | extended\n| |\n reserved3 | character varying(100)\n| | extended\n| |\n atpco | smallint | not null default\n0 | plain | |\n rsrvdnum1 | integer\n| | plain\n| |\n rsrvdnum2 | integer\n| | plain\n| |\n rsrvdnum3 | integer\n| | plain\n| |\n instance | 
character varying(3)\n| | extended\n| |\n duration | integer | default\n0 | plain | |\n ip | integer\n| | plain\n| |\nTriggers:\n insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROW\nEXECUTE PROCEDURE my_log_daily_insert_trigger()\nChild tables: my_log_daily_170901,\n              my_log_daily_170902,\n              my_log_daily_170903,\n              my_log_daily_170904,\n              my_log_daily_170905,\n              my_log_daily_170906,\n              my_log_daily_170907,\n              my_log_daily_170908,\n              my_log_daily_170909,\n              my_log_daily_170910,\n              my_log_daily_170911,\n              my_log_daily_170912,\n              my_log_daily_170913,\n              my_log_daily_170914,\n              my_log_daily_170915,\n              my_log_daily_170916,\n              my_log_daily_170917,\n              my_log_daily_170918,\n              my_log_daily_170919,\n              my_log_daily_170920,\n              my_log_daily_170921,\n              my_log_daily_170922,\n              my_log_daily_170923,\n              my_log_daily_170924,\n              my_log_daily_170925,\n              my_log_daily_170926,\n              my_log_daily_170927,\n              my_log_daily_170928,\n              my_log_daily_170929,\n              my_log_daily_170930,\n              my_log_daily_171001,\n              my_log_daily_171002,\n              my_log_daily_171003,\n              my_log_daily_171004,\n              my_log_daily_171005,\n              my_log_daily_171006,\n              my_log_daily_171007,\n              my_log_daily_171008,\n              my_log_daily_171009,\n              my_log_daily_171010\n\ndb=>\n",
"msg_date": "Thu, 07 Sep 2017 15:13:48 +0000",
"msg_from": "Konstantin Kivi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor perfomance of update (and select) on partitioned tables"
}
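The numbers in this report point at planner overhead rather than executor time: with inheritance partitioning in 9.5, constraint exclusion has to examine every child's CHECK constraint for every statement, so planning time grows with the number of children while execution time stays flat. A minimal sketch for confirming that on this schema (table and column names taken from the message above); the only thing to compare is the "Planning time" line of the two outputs:

-- Parent table: the planner must consider all 40 children before pruning.
SET constraint_exclusion = partition;   -- already the default in 9.5
EXPLAIN ANALYZE
SELECT stage + 1
FROM my_log_daily
WHERE date1 >= '2017-09-06' AND date1 < '2017-09-07'
  AND msgid1 = 3414253 AND msgid2 = 20756
  AND msgid3 = 1504712117 AND instance = 'WS6';

-- Child table addressed directly: no exclusion work at plan time.
EXPLAIN ANALYZE
SELECT stage + 1
FROM my_log_daily_170906
WHERE date1 >= '2017-09-06' AND date1 < '2017-09-07'
  AND msgid1 = 3414253 AND msgid2 = 20756
  AND msgid3 = 1504712117 AND instance = 'WS6';

If the difference shows up only in planning time, the usual workarounds on 9.5 are pointing hot statements at the child tables directly (as the test above already does) or keeping the number of attached children small.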
] |
[
{
"msg_contents": "Dear,\nI'm trying to interpret an Explain Analyze, but I did not understand this:\n\n-> According to the Postgresql documentation at: https://www.postgresql.org/\ndocs/9.6/static/using-explain.html\n\n\" the loops value reports the total number of executions of the node, and\nthe actual time and rows values shown are averages per-execution.\n.... Multiply by the loops value to get the total time actually spent in\nthe node\"\n\nBut look at this case, in which the total query time was 66 minutes.\n(Explain Analyze complete and Query at this link: https://goo.gl/Kp45fu )\n\nWhat interests me is this section:\n\n################################ ###################################\n -> Index Scan using idx_l_partkeylineitem000x on lineitem (cost =\n0.57..97.65 rows = 26 width = 36)\n (current time = 23.615..419.113 rows = 30 loops = 26469)\n Index Cond: (l_partkey = part.p_partkey)\n################################################## #################\nAccording to the documentation, one should multiply the Actual Time by the\nnumber of Loops.\nThat is: 419113 ms -> 419113/1000/60 = 6.9 minutes * 26469 (loops) = 182.6\nminutes.\n\nBut how does this stretch take 182.6 minutes, if the entire query ran in 66\nminutes?\n\nOf course I'm making a miscalculation, but if anyone can give me a hint as\nto how I would calculate this time.\nWhat I need to know is the time spent go through the\nidx_l_partkeylineitem000x index, remembering that I did an Explain Analyze\nwhich is theoretically the actual time spent and not an estimate\nas happens with the simple Explain .\n\nthank you and best regards\n[] 's Neto\n",
"msg_date": "Thu, 7 Sep 2017 20:17:15 -0700",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Explain Analyze - actual time in loops"
},
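Worked out in SQL with the per-loop value read as milliseconds (as the replies below point out, 419.113 is the per-loop actual time in ms, not 419113 ms), the formula from the documentation gives roughly 184 minutes of accumulated node time:

-- total node time = per-loop actual time (ms) x number of loops
SELECT 419.113                                   AS ms_per_loop,
       26469                                     AS loops,
       round(419.113 * 26469 / 1000.0)           AS total_seconds,   -- about 11094
       round(419.113 * 26469 / 1000.0 / 60.0, 1) AS total_minutes;   -- about 184.9

How an accumulated 184 minutes can fit inside a 66-minute query is answered further down the thread: under a Gather node the times of all parallel processes are summed.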
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Neto pr\r\nSent: Thursday, September 07, 2017 11:17 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Explain Analyze - actual time in loops\r\n\r\n…\r\n################################ ###################################\r\n -> Index Scan using idx_l_partkeylineitem000x on lineitem (cost = 0.57..97.65 rows = 26 width = 36)\r\n (current time = 23.615..419.113 rows = 30 loops = 26469)\r\n Index Cond: (l_partkey = part.p_partkey)\r\n################################################## #################\r\nAccording to the documentation, one should multiply the Actual Time by the number of Loops.\r\nThat is: 419113 ms -> 419113/1000/60 = 6.9 minutes * 26469 (loops) = 182.6 minutes.\r\n\r\nBut how does this stretch take 182.6 minutes, if the entire query ran in 66 minutes?\r\n\r\n……………….\r\nthank you and best regards\r\n[] 's Neto\r\nNeto,\r\nThe time you see there is in ms, so the point (‘.’) you see is the digital point.\r\nSo, it is 419.113ms or a little less than half a second (0.419sec).\r\nIgor Neyman\r\n",
"msg_date": "Fri, 8 Sep 2017 12:46:45 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain Analyze - actual time in loops"
},
{
"msg_contents": "Igor,\n\nYou're right, I confused the radix character.\n\nBut even so the result is approximate to the previous message, 182 minutes, see\nbelow:\n\n419.113 / 1000 = 0.41 seconds * 26469 (loops) = 11093.50 seconds or 184\nminutes\n\nAfter analyzing, I saw that in some places of the plan, it is being used\nParallelism. Does this explain why the final value spent (in minutes) to go\nthrough the index (184 minutes) is greater than the total query time (66\nminutes)?\n\nRegards\nNeto\n\n2017-09-08 5:46 GMT-07:00 Igor Neyman <[email protected]>:\n\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Neto pr\n> *Sent:* Thursday, September 07, 2017 11:17 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] Explain Analyze - actual time in loops\n>\n>\n>\n> …\n>\n> ################################ ###################################\n> -> Index Scan using idx_l_partkeylineitem000x on lineitem (cost =\n> 0.57..97.65 rows = 26 width = 36)\n> (current time = 23.615..419.113 rows = 30 loops = 26469)\n> Index Cond: (l_partkey = part.p_partkey)\n> ################################################## #################\n> According to the documentation, one should multiply the Actual Time by the\n> number of Loops.\n> That is: 419113 ms -> 419113/1000/60 = 6.9 minutes * 26469 (loops) = 182.6\n> minutes.\n>\n> But how does this stretch take 182.6 minutes, if the entire query ran in\n> 66 minutes?\n>\n> ……………….\n> thank you and best regards\n> [] 's Neto\n>\n> Neto,\n>\n> The time you see there is in ms, so the point (‘.’) you see is the digital\n> point.\n>\n> So, it is 419.113ms or a little less than half a second (0.419sec).\n>\n> Igor Neyman\n>\n",
"msg_date": "Fri, 8 Sep 2017 06:08:08 -0700",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Explain Analyze - actual time in loops"
},
{
"msg_contents": "Neto pr <[email protected]> writes:\n> After analyzing, I saw that in some places of the plan, it is being used\n> Parallelism. Does this explain why the final value spent (in minutes) to go\n> through the index (184 minutes) is greater than the total query time (66\n> minutes)?\n\nI was just about to ask you about that. If this is under a Gather node,\nI believe that the numbers include time expended in all processes.\nSo if you had three or more workers these results would make sense.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Sep 2017 09:44:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain Analyze - actual time in loops"
},
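A rough consistency check of that explanation, assuming three contributing processes (a leader plus two workers is an assumption; the real figure appears as "Workers Launched" under the Gather node in the EXPLAIN ANALYZE output):

-- Summed per-process time divided by the number of contributing processes
-- should land near the query's wall-clock time.
SELECT 184.0               AS summed_minutes,
       3                   AS contributing_processes,   -- assumed: leader + 2 workers
       round(184.0 / 3, 1) AS minutes_per_process;      -- about 61, close to the 66-minute run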
{
"msg_contents": "Thanks for reply Tom and Igor.\n\nJust only more information:\n\nI need to know the height of a B-tree index (level of the leaf node\nfarthest from the root).\n\nI tried to find this data in PG_INDEXES and PG_CLASS views, but I did not\nfind it.\nDoes anyone know if Postgresql stores this information, referring to the\nheight of the index tree?\n\nRegards\n\n\n2017-09-08 6:44 GMT-07:00 Tom Lane <[email protected]>:\n\n> Neto pr <[email protected]> writes:\n> > After analyzing, I saw that in some places of the plan, it is being used\n> > Parallelism. Does this explain why the final value spent (in minutes) to\n> go\n> > through the index (184 minutes) is greater than the total query time (66\n> > minutes)?\n>\n> I was just about to ask you about that. If this is under a Gather node,\n> I believe that the numbers include time expended in all processes.\n> So if you had three or more workers these results would make sense.\n>\n> regards, tom lane\n>\n",
"msg_date": "Fri, 8 Sep 2017 07:02:37 -0700",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Explain Analyze - actual time in loops"
},
{
"msg_contents": "Neto pr <[email protected]> writes:\n> I need to know the height of a B-tree index (level of the leaf node\n> farthest from the root).\n\npageinspect's bt_metap() will give you that --- it's the \"level\"\nfield, I believe.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Sep 2017 10:07:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain Analyze - actual time in loops"
}
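For completeness, a sketch of that call; pageinspect is a contrib extension (superuser is typically required), and the index name is the one from the plan being discussed:

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- "level" is the height of the B-tree: 0 means the root page is itself a leaf.
SELECT level
FROM bt_metap('idx_l_partkeylineitem000x');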
] |
[
{
"msg_contents": "I want to check something regarding postgresql performance during my app is\nrunning.\n\nMy app does the next things on 20 tables in a loop :\n\n1.truncate table.\n2.drop constraints on table\n3.drop indexes on table\n4.insert into local_table select * from remote_oracle_table\n4.1.Recently I'm getting an error in this part : SQLERRM = could not extend\n file \"base/16400/124810.23\": wrote only 4096 of 8192 bytes at block\n 3092001\n5.create constraints on table\n6.create indexes on table.\n\nThis operation runs every night. Most of the tables are small 500M-2G but\nfew tables are pretty big 24G-45G.\n\nMy wals and my data directory are on different fs. My data directory fs\nsize is 400G. During this operation the data directory fs becomes full.\nHowever, after this operation 100G are freed which means that 300G are used\nfrom the 400g of the data directory fs. Something regarding those sizes\ndoesnt seems ok.\n\nWhen I check my database size :\n\nmydb=# SELECT\nmydb-# pg_database.datname,\nmydb-# pg_size_pretty(pg_database_size(pg_database.datname)) AS size\nmydb-# FROM pg_database;\n datname | size\n -----------+---------\n template0 | 7265 kB\n mydb | 246 GB\n postgres | 568 MB\n template1 | 7865 kB\n (4 rows)\n\nWhen I check all the tables in mydb database :\n\nmydb-# relname as \"Table\",\nmydb-# pg_size_pretty(pg_total_relation_size(relid)) As \"Size\",\nmydb-# pg_size_pretty(pg_total_relation_size(relid) -\n pg_relation_size(relid)) as \"External Size\"\nmydb-# FROM pg_catalog.pg_statio_user_tables ORDER BY\n pg_total_relation_size(relid) DESC;\n Table | Size | External Size\n -------------------+------------+---------------\n table 1| 45 GB | 13 GB\n table 2| 15 GB | 6330 MB\n table 3| 9506 MB | 3800 MB\n table 4| 7473 MB | 1838 MB\n table 5| 7267 MB | 2652 MB\n table 6| 5347 MB | 1701 MB\n table 7| 3402 MB | 1377 MB\n table 8| 3092 MB | 1318 MB\n table 9| 2145 MB | 724 MB\n table 10| 1804 MB | 381 MB\n table 11 293 MB | 83 MB\n table 12| 268 MB | 103 MB\n table 13| 225 MB | 108 MB\n table 14| 217 MB | 40 MB\n table 15| 172 MB | 47 MB\n table 16| 134 MB | 36 MB\n table 17| 102 MB | 27 MB\n table 18| 86 MB | 22 MB\n .....\n\nIn the data directory the base directory`s size is 240G. I have 16G of ram\nin my machine.\n\nWaiting for help, thanks.\n\nI want to check something regarding postgresql performance during my app is running.My app does the next things on 20 tables in a loop :1.truncate table.2.drop constraints on table3.drop indexes on table4.insert into local_table select * from remote_oracle_table4.1.Recently I'm getting an error in this part : SQLERRM = could not extend file \"base/16400/124810.23\": wrote only 4096 of 8192 bytes at block 30920015.create constraints on table6.create indexes on table.This operation runs every night. Most of the tables are small 500M-2G but few tables are pretty big 24G-45G.My wals and my data directory are on different fs. My data directory fs size is 400G. During this operation the data directory fs becomes full. However, after this operation 100G are freed which means that 300G are used from the 400g of the data directory fs. 
Something regarding those sizes doesnt seems ok.When I check my database size :mydb=# SELECTmydb-# pg_database.datname,mydb-# pg_size_pretty(pg_database_size(pg_database.datname)) AS sizemydb-# FROM pg_database; datname | size -----------+--------- template0 | 7265 kB mydb | 246 GB postgres | 568 MB template1 | 7865 kB (4 rows)When I check all the tables in mydb database :mydb-# relname as \"Table\",mydb-# pg_size_pretty(pg_total_relation_size(relid)) As \"Size\",mydb-# pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as \"External Size\"mydb-# FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC; Table | Size | External Size -------------------+------------+--------------- table 1| 45 GB | 13 GB table 2| 15 GB | 6330 MB table 3| 9506 MB | 3800 MB table 4| 7473 MB | 1838 MB table 5| 7267 MB | 2652 MB table 6| 5347 MB | 1701 MB table 7| 3402 MB | 1377 MB table 8| 3092 MB | 1318 MB table 9| 2145 MB | 724 MB table 10| 1804 MB | 381 MB table 11 293 MB | 83 MB table 12| 268 MB | 103 MB table 13| 225 MB | 108 MB table 14| 217 MB | 40 MB table 15| 172 MB | 47 MB table 16| 134 MB | 36 MB table 17| 102 MB | 27 MB table 18| 86 MB | 22 MB .....In the data directory the base directory`s size is 240G. I have 16G of ram in my machine.Waiting for help, thanks.",
"msg_date": "Mon, 11 Sep 2017 12:42:01 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql 9.6 data directory fs becomes full"
},
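One way to narrow down where the 300G is going is to compare the database size with the sum of live relation sizes; a rough sketch (system schemas are left out of the sum, and the numbers will not match df while the nightly transaction is still open, since files released by TRUNCATE or DROP are only unlinked at commit):

SELECT pg_size_pretty(pg_database_size(current_database()))        AS database_size,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint)  AS tables_indexes_toast
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'm')   -- plain tables and matviews; their indexes and TOAST are included
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');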
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> My app does the next things on 20 tables in a loop :\n\n> 1.truncate table.\n> 2.drop constraints on table\n> 3.drop indexes on table\n> 4.insert into local_table select * from remote_oracle_table\n> 4.1.Recently I'm getting an error in this part : SQLERRM = could not extend\n> file \"base/16400/124810.23\": wrote only 4096 of 8192 bytes at block\n> 3092001\n> 5.create constraints on table\n> 6.create indexes on table.\n\nHm, are you committing anywhere in this loop? If not, the old data\nremains on disk till you do end the transaction.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Sep 2017 08:02:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 9.6 data directory fs becomes full"
},
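A minimal sketch of that point, using the step names from the original loop (constraint, index, and column names here are placeholders): when each table's reload is committed on its own, the files released by TRUNCATE can be unlinked as soon as that table finishes, instead of accumulating until the whole nightly job ends.

BEGIN;
TRUNCATE local_table;                                            -- step 1
ALTER TABLE local_table DROP CONSTRAINT local_table_check;       -- step 2 (placeholder name)
DROP INDEX IF EXISTS local_table_idx;                            -- step 3 (placeholder name)
INSERT INTO local_table SELECT * FROM remote_oracle_table;       -- step 4
ALTER TABLE local_table ADD CONSTRAINT local_table_check
      CHECK (some_column IS NOT NULL);                           -- step 5 (placeholder definition)
CREATE INDEX local_table_idx ON local_table (some_column);       -- step 6 (placeholder column)
COMMIT;  -- only now can the old relation files be removed from disk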
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n\n> I want to check something regarding postgresql performance during my\n> app is running.\n>\n> My app does the next things on 20 tables in a loop :\n>\n> 1.truncate table.\n> 2.drop constraints on table\n> 3.drop indexes on table\n> 4.insert into local_table select * from remote_oracle_table\n> 4.1.Recently I'm getting an error in this part : SQLERRM = could not extend \n> file \"base/16400/124810.23\": wrote only 4096 of 8192 bytes at block \n> 3092001\n> 5.create constraints on table\n> 6.create indexes on table.\n>\n> This operation runs every night. Most of the tables are small 500M-2G\n> but few tables are pretty big 24G-45G.\n>\n> My wals and my data directory are on different fs. My data directory\n> fs size is 400G. During this operation the data directory fs becomes\n> full. However, after this operation 100G are freed which means that\n> 300G are used from the 400g of the data directory fs. Something\n> regarding those sizes doesnt seems ok.\n>\n> When I check my database size :\n>\n> mydb=# SELECT\n> mydb-# pg_database.datname,\n> mydb-# pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n> mydb-# FROM pg_database;\n> datname | size \n> -----------+---------\n> template0 | 7265 kB\n> mydb | 246 GB\n> postgres | 568 MB\n> template1 | 7865 kB\n> (4 rows)\n>\n> When I check all the tables in mydb database :\n>\n> mydb-# relname as \"Table\",\n> mydb-# pg_size_pretty(pg_total_relation_size(relid)) As \"Size\",\n> mydb-# pg_size_pretty(pg_total_relation_size(relid) - \n> pg_relation_size(relid)) as \"External Size\"\n> mydb-# FROM pg_catalog.pg_statio_user_tables ORDER BY \n> pg_total_relation_size(relid) DESC;\n> Table | Size | External Size \n> -------------------+------------+---------------\n> table 1| 45 GB | 13 GB\n> table 2| 15 GB | 6330 MB\n> table 3| 9506 MB | 3800 MB\n> table 4| 7473 MB | 1838 MB\n> table 5| 7267 MB | 2652 MB\n> table 6| 5347 MB | 1701 MB\n> table 7| 3402 MB | 1377 MB\n> table 8| 3092 MB | 1318 MB\n> table 9| 2145 MB | 724 MB\n> table 10| 1804 MB | 381 MB\n> table 11 293 MB | 83 MB\n> table 12| 268 MB | 103 MB\n> table 13| 225 MB | 108 MB\n> table 14| 217 MB | 40 MB\n> table 15| 172 MB | 47 MB\n> table 16| 134 MB | 36 MB\n> table 17| 102 MB | 27 MB\n> table 18| 86 MB | 22 MB\n> .....\n>\n> In the data directory the base directory`s size is 240G. I have 16G\n> of ram in my machine.\n>\n> Waiting for help, thanks.\n\nYou didn't say but if I can assume you're doing this work in a\ntransaction...\n\nYou understand that space is *not* freed by the truncate until commit, right?\n\n\n\n>\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Sep 2017 12:07:56 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql 9.6 data directory fs becomes full"
}
] |
[
{
"msg_contents": "Postgres 9.5\n\nI have a query of a partitioned table that uses the partition index in\nproduction but uses sequence scans in qa. The only major difference I can\ntell is the partitions are much smaller in qa. In production the\npartitions range in size from around 25 million rows to around 60 million\nrows, in QA the partitions are between 4 and 12 million rows. I would\nthink this would be big enough to get the planner to prefer the index but\nthis is the major difference between the two database as far as I can tell.\n\nWhen I run the query in qa with enable seqscan=false I get the much faster\nplan. Both systems are manually vacuumed and analyzed each night. Both\nsystems have identical settings for memory and are allocated the same for\nsystem resources. Neither system is showing substantial index or table\nbloat above .1-1% for any of the key indexes in question.\n\n\n\nHere is the query with the seq scan plan in qa:\n\n explain select rankings from (select\n\ne.body->>'SID' as temp_SID,\n\nCASE WHEN e.source_id = 168 THEN e.body->>'Main Menu' ELSE e.body->>'Prompt\nSelection 1' END as temp_ivr_selection_prompt1,\n\ne.body->>'Existing Customer' as temp_ivr_selection_prompt2,\n\ne.body->>'Business Services' as temp_ivr_selection_prompt3,\n\ne.body->>'Prompt for ZIP' as temp_ivr_selection_zip,\n\nrank() over (Partition by e.body->>'SID' order by e.body->>'Timestamp'\ndesc) as rank1\n\nfrom stage.event e\n\nwhere e.validation_status_code = 'P'\n\nAND e.body->>'SID' is not null --So that matches are not made on NULL values\n\n\nAND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as\nrankings;\n\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n\n│ QUERY PLAN\n │\n\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n\n│ Subquery Scan on rankings (cost=42370434.66..44254952.76 rows=37690362\nwidth=24) │\n\n│ -> WindowAgg (cost=42370434.66..43878049.14 rows=37690362 width=769)\n │\n\n│ -> Sort (cost=42370434.66..42464660.56 rows=37690362 width=769)\n │\n\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->>\n'Timestamp'::text)) DESC │\n\n│ -> Hash Join (cost=46.38..22904737.49 rows=37690362\nwidth=769) │\n\n│ Hash Cond: (e.landing_id = t_sap.landing_id)\n │\n\n│ -> Append (cost=0.00..22568797.21 rows=75380725\nwidth=773) │\n\n│ -> Seq Scan on event e (cost=0.00..1.36\nrows=1 width=97) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__99999999 e_1\n(cost=0.00..2527918.06\nrows=11457484 width=782) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00069000 e_2\n(cost=0.00..1462329.01\nrows=5922843 width=772) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00070000 e_3\n(cost=0.00..1534324.60\nrows=6003826 width=785) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00071000 e_4\n(cost=0.00..2203954.48\nrows=6508965 width=780) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00072000 e_5\n(cost=0.00..1530805.89\nrows=5759797 width=792) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 
'P'::bpchar)) │\n\n│ -> Seq Scan on event__00073000 e_6\n(cost=0.00..1384818.75\nrows=5888869 width=759) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00074000 e_7\n(cost=0.00..1288777.54\nrows=4734867 width=806) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00075000 e_8\n(cost=0.00..1231949.17\nrows=3934318 width=788) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00076000 e_9\n(cost=0.00..1426221.05\nrows=3706123 width=718) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00077000 e_10\n(cost=0.00..1432111.14\nrows=4093124 width=718) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00078000 e_11\n(cost=0.00..1736628.35\nrows=4197864 width=703) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00079000 e_12\n(cost=0.00..1870095.09\nrows=4550502 width=771) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00080000 e_13\n(cost=0.00..1909692.50\nrows=5020831 width=791) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00081000 e_14\n(cost=0.00..1029159.30\nrows=3601310 width=823) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Seq Scan on event__00000000 e_15\n(cost=0.00..10.90\nrows=1 width=40) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar)) │\n\n│ -> Hash (cost=43.88..43.88 rows=200 width=4)\n │\n\n│ -> HashAggregate (cost=41.88..43.88 rows=200\nwidth=4) │\n\n│ Group Key: t_sap.landing_id\n │\n\n│ -> Seq Scan on t_sap (cost=0.00..35.50\nrows=2550 width=4) │\n\n\nAnd here is the query with index scan from production:\n\nexplain select rankings from (select\n\ne.body->>'SID' as temp_SID,\n\nCASE WHEN e.source_id = 168 THEN e.body->>'Main Menu' ELSE e.body->>'Prompt\nSelection 1' END as temp_ivr_selection_prompt1,\n\ne.body->>'Existing Customer' as temp_ivr_selection_prompt2,\n\ne.body->>'Business Services' as temp_ivr_selection_prompt3,\n\ne.body->>'Prompt for ZIP' as temp_ivr_selection_zip,\n\nrank() over (Partition by e.body->>'SID' order by e.body->>'Timestamp'\ndesc) as rank1\n\nfrom stage.event e\n\nwhere e.validation_status_code = 'P'\n\nAND e.body->>'SID' is not null\n\nAND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as\nrankings;\n\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n\n│\n QUERY\nPLAN\n │\n\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n\n│ Subquery Scan on rankings (cost=239975317.06..256673146.81\nrows=333956595 width=24)\n │\n\n│ -> WindowAgg (cost=239975317.06..253333580.86 rows=333956595\nwidth=719)\n │\n\n│ -> Sort (cost=239975317.06..240810208.54 rows=333956595\nwidth=719)\n │\n\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->>\n'Timestamp'::text)) DESC\n │\n\n│ -> Nested Loop 
(cost=41.88..71375097.58 rows=333956595\nwidth=719)\n │\n\n│ -> HashAggregate (cost=41.88..43.88 rows=200\nwidth=4)\n │\n\n│ Group Key: t_sap.landing_id\n\n │\n\n│ -> Seq Scan on t_sap (cost=0.00..35.50\nrows=2550 width=4)\n │\n\n│ -> Append (cost=0.00..351670.76 rows=520451\nwidth=687)\n │\n\n│ -> Seq Scan on event e (cost=0.00..0.00\nrows=1 width=40)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar) AND (t_sap.landing_id =\nlanding_id)) │\n\n│ -> Index Scan using\nix_event__00011162_landing_id on event__00011162 e_1 (cost=0.56..15476.59\nrows=23400 width=572) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00012707_landing_id on event__00012707 e_2 (cost=0.56..25383.27\nrows=36716 width=552) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00014695_landing_id on event__00014695 e_3 (cost=0.56..39137.89\nrows=37697 width=564) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00016874_landing_id on event__00016874 e_4 (cost=0.43..24521.55\nrows=26072 width=591) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Seq Scan on event__00017048 e_5\n(cost=0.00..9845.19\nrows=45827 width=597) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar) AND (t_sap.landing_id =\nlanding_id)) │\n\n│ -> Index Scan using\nix_event__00017049_landing_id on event__00017049 e_6 (cost=0.56..31594.23\nrows=28708 width=616) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00018387_landing_id on event__00018387 e_7 (cost=0.56..22343.55\nrows=26953 width=657) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00022500_landing_id on event__00022500 e_8 (cost=0.56..31845.78\nrows=32011 width=701) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00025594_landing_id on event__00025594 e_9 (cost=0.56..19097.50\nrows=25077 width=717) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00030035_landing_id on event__00030035 e_10 (cost=0.56..21510.00\nrows=30867 width=678) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00034082_landing_id on event__00034082 e_11 (cost=0.56..28686.63\nrows=32609 width=785) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan 
using\nix_event__00037667_landing_id on event__00037667 e_12 (cost=0.56..19990.15\nrows=23948 width=710) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00043603_landing_id on event__00043603 e_13 (cost=0.56..7554.78\nrows=17043 width=563) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00049785_landing_id on event__00049785 e_14 (cost=0.57..18857.27\nrows=51295 width=863) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00056056_landing_id on event__00056056 e_15 (cost=0.56..8595.30\nrows=21346 width=865) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00062926_landing_id on event__00062926 e_16 (cost=0.56..5120.32\nrows=14816 width=790) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00071267_landing_id on event__00071267 e_17 (cost=0.56..8471.75\nrows=14092 width=793) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00076729_landing_id on event__00076729 e_18 (cost=0.56..4593.36\nrows=11599 width=796) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00078600_landing_id on event__00078600 e_19 (cost=0.56..4940.39\nrows=13528 width=804) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Index Scan using\nix_event__00080741_landing_id on event__00080741 e_20 (cost=0.56..4105.25\nrows=6846 width=760) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\nAny ideas for how to convince postgres to choose the faster plan in qa?\nThanks!\nMike\n\nPostgres 9.5 I have a query of a partitioned table that uses the partition index in production but uses sequence scans in qa. The only major difference I can tell is the partitions are much smaller in qa. In production the partitions range in size from around 25 million rows to around 60 million rows, in QA the partitions are between 4 and 12 million rows. I would think this would be big enough to get the planner to prefer the index but this is the major difference between the two database as far as I can tell.When I run the query in qa with enable seqscan=false I get the much faster plan. Both systems are manually vacuumed and analyzed each night. Both systems have identical settings for memory and are allocated the same for system resources. 
Neither system is showing substantial index or table bloat above .1-1% for any of the key indexes in question.Here is the query with the seq scan plan in qa:\n explain select rankings from (select \ne.body->>'SID' as temp_SID, \nCASE WHEN e.source_id = 168 THEN e.body->>'Main Menu' ELSE e.body->>'Prompt Selection 1' END as temp_ivr_selection_prompt1,\ne.body->>'Existing Customer' as temp_ivr_selection_prompt2, \ne.body->>'Business Services' as temp_ivr_selection_prompt3, \ne.body->>'Prompt for ZIP' as temp_ivr_selection_zip, \nrank() over (Partition by e.body->>'SID' order by e.body->>'Timestamp' desc) as rank1 \nfrom stage.event e \nwhere e.validation_status_code = 'P' \nAND e.body->>'SID' is not null --So that matches are not made on NULL values \nAND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as rankings; \n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Subquery Scan on rankings (cost=42370434.66..44254952.76 rows=37690362 width=24) │\n│ -> WindowAgg (cost=42370434.66..43878049.14 rows=37690362 width=769) │\n│ -> Sort (cost=42370434.66..42464660.56 rows=37690362 width=769) │\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->> 'Timestamp'::text)) DESC │\n│ -> Hash Join (cost=46.38..22904737.49 rows=37690362 width=769) │\n│ Hash Cond: (e.landing_id = t_sap.landing_id) │\n│ -> Append (cost=0.00..22568797.21 rows=75380725 width=773) │\n│ -> Seq Scan on event e (cost=0.00..1.36 rows=1 width=97) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__99999999 e_1 (cost=0.00..2527918.06 rows=11457484 width=782) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00069000 e_2 (cost=0.00..1462329.01 rows=5922843 width=772) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00070000 e_3 (cost=0.00..1534324.60 rows=6003826 width=785) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00071000 e_4 (cost=0.00..2203954.48 rows=6508965 width=780) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00072000 e_5 (cost=0.00..1530805.89 rows=5759797 width=792) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00073000 e_6 (cost=0.00..1384818.75 rows=5888869 width=759) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00074000 e_7 (cost=0.00..1288777.54 rows=4734867 width=806) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00075000 e_8 (cost=0.00..1231949.17 rows=3934318 width=788) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00076000 e_9 (cost=0.00..1426221.05 rows=3706123 width=718) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00077000 e_10 (cost=0.00..1432111.14 rows=4093124 width=718) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND 
(validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00078000 e_11 (cost=0.00..1736628.35 rows=4197864 width=703) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00079000 e_12 (cost=0.00..1870095.09 rows=4550502 width=771) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00080000 e_13 (cost=0.00..1909692.50 rows=5020831 width=791) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00081000 e_14 (cost=0.00..1029159.30 rows=3601310 width=823) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00000000 e_15 (cost=0.00..10.90 rows=1 width=40) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Hash (cost=43.88..43.88 rows=200 width=4) │\n│ -> HashAggregate (cost=41.88..43.88 rows=200 width=4) │\n│ Group Key: t_sap.landing_id │\n│ -> Seq Scan on t_sap (cost=0.00..35.50 rows=2550 width=4) │And here is the query with index scan from production:\nexplain select rankings from (select \ne.body->>'SID' as temp_SID, \nCASE WHEN e.source_id = 168 THEN e.body->>'Main Menu' ELSE e.body->>'Prompt Selection 1' END as temp_ivr_selection_prompt1,\ne.body->>'Existing Customer' as temp_ivr_selection_prompt2, \ne.body->>'Business Services' as temp_ivr_selection_prompt3, \ne.body->>'Prompt for ZIP' as temp_ivr_selection_zip, \nrank() over (Partition by e.body->>'SID' order by e.body->>'Timestamp' desc) as rank1 \nfrom stage.event e \nwhere e.validation_status_code = 'P' \nAND e.body->>'SID' is not null \nAND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as rankings; \n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Subquery Scan on rankings (cost=239975317.06..256673146.81 rows=333956595 width=24) │\n│ -> WindowAgg (cost=239975317.06..253333580.86 rows=333956595 width=719) │\n│ -> Sort (cost=239975317.06..240810208.54 rows=333956595 width=719) │\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->> 'Timestamp'::text)) DESC │\n│ -> Nested Loop (cost=41.88..71375097.58 rows=333956595 width=719) │\n│ -> HashAggregate (cost=41.88..43.88 rows=200 width=4) │\n│ Group Key: t_sap.landing_id │\n│ -> Seq Scan on t_sap (cost=0.00..35.50 rows=2550 width=4) │\n│ -> Append (cost=0.00..351670.76 rows=520451 width=687) │\n│ -> Seq Scan on event e (cost=0.00..0.00 rows=1 width=40) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar) AND (t_sap.landing_id = landing_id)) │\n│ -> Index Scan using ix_event__00011162_landing_id on event__00011162 e_1 (cost=0.56..15476.59 rows=23400 width=572) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00012707_landing_id on event__00012707 e_2 (cost=0.56..25383.27 rows=36716 width=552) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using 
ix_event__00014695_landing_id on event__00014695 e_3 (cost=0.56..39137.89 rows=37697 width=564) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00016874_landing_id on event__00016874 e_4 (cost=0.43..24521.55 rows=26072 width=591) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00017048 e_5 (cost=0.00..9845.19 rows=45827 width=597) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar) AND (t_sap.landing_id = landing_id)) │\n│ -> Index Scan using ix_event__00017049_landing_id on event__00017049 e_6 (cost=0.56..31594.23 rows=28708 width=616) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00018387_landing_id on event__00018387 e_7 (cost=0.56..22343.55 rows=26953 width=657) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00022500_landing_id on event__00022500 e_8 (cost=0.56..31845.78 rows=32011 width=701) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00025594_landing_id on event__00025594 e_9 (cost=0.56..19097.50 rows=25077 width=717) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00030035_landing_id on event__00030035 e_10 (cost=0.56..21510.00 rows=30867 width=678) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00034082_landing_id on event__00034082 e_11 (cost=0.56..28686.63 rows=32609 width=785) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00037667_landing_id on event__00037667 e_12 (cost=0.56..19990.15 rows=23948 width=710) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00043603_landing_id on event__00043603 e_13 (cost=0.56..7554.78 rows=17043 width=563) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00049785_landing_id on event__00049785 e_14 (cost=0.57..18857.27 rows=51295 width=863) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00056056_landing_id on event__00056056 e_15 (cost=0.56..8595.30 rows=21346 width=865) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00062926_landing_id on event__00062926 e_16 (cost=0.56..5120.32 rows=14816 width=790) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: 
(((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00071267_landing_id on event__00071267 e_17 (cost=0.56..8471.75 rows=14092 width=793) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00076729_landing_id on event__00076729 e_18 (cost=0.56..4593.36 rows=11599 width=796) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00078600_landing_id on event__00078600 e_19 (cost=0.56..4940.39 rows=13528 width=804) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Index Scan using ix_event__00080741_landing_id on event__00080741 e_20 (cost=0.56..4105.25 rows=6846 width=760) │\n│ Index Cond: (landing_id = t_sap.landing_id) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────Any ideas for how to convince postgres to choose the faster plan in qa? Thanks!Mike",
"msg_date": "Wed, 13 Sep 2017 15:28:18 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "query of partitioned object doesnt use index in qa"
},
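Before reaching for planner knobs, it is worth checking how far the QA estimates are from reality for one of the seq-scanned partitions; a sketch using a partition name from the QA plan (assuming the children live in the same stage schema as the parent):

-- What the planner thinks the partition contains:
SELECT relname, reltuples::bigint AS estimated_rows, relpages
FROM pg_class
WHERE relname = 'event__99999999';

-- What the query's filter actually matches:
SELECT count(*)
FROM stage.event__99999999
WHERE validation_status_code = 'P'
  AND body->>'SID' IS NOT NULL;

If those two differ wildly, the reply below is the place to start: make sure the nightly ANALYZE is really covering the children.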
{
"msg_contents": "On 14 September 2017 at 08:28, Mike Broers <[email protected]> wrote:\n> I have a query of a partitioned table that uses the partition index in\n> production but uses sequence scans in qa. The only major difference I can\n> tell is the partitions are much smaller in qa. In production the partitions\n> range in size from around 25 million rows to around 60 million rows, in QA\n> the partitions are between 4 and 12 million rows. I would think this would\n> be big enough to get the planner to prefer the index but this is the major\n> difference between the two database as far as I can tell.\n\n\nQA:\n\n> │ -> Seq Scan on event__99999999 e_1\n> (cost=0.00..2527918.06 rows=11457484 width=782) │\n>\n\nProduction:\n>\n> │ -> Index Scan using\n> ix_event__00011162_landing_id on event__00011162 e_1 (cost=0.56..15476.59\n> rows=23400 width=572) │\n\n\nIf QA has between 4 and 12 million rows, then the planner's row\nestimate for the condition thinks 11457484 are going to match, so a\nSeqscan is likely best here. If those estimates are off then it might\nbe worth double checking your nightly analyze is working correctly on\nQA.\n\nThe planner may be able to be coaxed into using the index with a\nhigher effective_cache_size and/or a lower random_page_cost setting,\nalthough you really should be looking at those row estimates first.\nShowing us the EXPLAIN ANALYZE would have been much more useful so\nthat we could have seen if those are accurate or not.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Sep 2017 09:57:54 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
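A session-local way to try those suggestions without touching the cluster configuration; the specific values are assumptions to experiment with rather than recommendations:

ANALYZE stage.event;           -- refresh inheritance statistics; children may need their own ANALYZE too
SET random_page_cost = 1.1;    -- assumption: data mostly cached or on fast storage
SET effective_cache_size = '12GB';  -- assumption: size this to the memory actually available for caching
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the original query in the same
-- session and compare the per-partition row estimates with the actual counts.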
{
"msg_contents": "Thanks for the suggestions, I'll futz with random_page_cost and\neffective_cache_size a bit and follow up, as well as try to provide an\nexplain analyze on both (if the longer query ever returns!)\n\nMost appreciated.\n\nOn Wed, Sep 13, 2017 at 4:57 PM, David Rowley <[email protected]>\nwrote:\n\n> On 14 September 2017 at 08:28, Mike Broers <[email protected]> wrote:\n> > I have a query of a partitioned table that uses the partition index in\n> > production but uses sequence scans in qa. The only major difference I\n> can\n> > tell is the partitions are much smaller in qa. In production the\n> partitions\n> > range in size from around 25 million rows to around 60 million rows, in\n> QA\n> > the partitions are between 4 and 12 million rows. I would think this\n> would\n> > be big enough to get the planner to prefer the index but this is the\n> major\n> > difference between the two database as far as I can tell.\n>\n>\n> QA:\n>\n> > │ -> Seq Scan on event__99999999 e_1\n> > (cost=0.00..2527918.06 rows=11457484 width=782) │\n> >\n>\n> Production:\n> >\n> > │ -> Index Scan using\n> > ix_event__00011162_landing_id on event__00011162 e_1\n> (cost=0.56..15476.59\n> > rows=23400 width=572) │\n>\n>\n> If QA has between 4 and 12 million rows, then the planner's row\n> estimate for the condition thinks 11457484 are going to match, so a\n> Seqscan is likely best here. If those estimates are off then it might\n> be worth double checking your nightly analyze is working correctly on\n> QA.\n>\n> The planner may be able to be coaxed into using the index with a\n> higher effective_cache_size and/or a lower random_page_cost setting,\n> although you really should be looking at those row estimates first.\n> Showing us the EXPLAIN ANALYZE would have been much more useful so\n> that we could have seen if those are accurate or not.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nThanks for the suggestions, I'll futz with random_page_cost and effective_cache_size a bit and follow up, as well as try to provide an explain analyze on both (if the longer query ever returns!)Most appreciated.On Wed, Sep 13, 2017 at 4:57 PM, David Rowley <[email protected]> wrote:On 14 September 2017 at 08:28, Mike Broers <[email protected]> wrote:\n> I have a query of a partitioned table that uses the partition index in\n> production but uses sequence scans in qa. The only major difference I can\n> tell is the partitions are much smaller in qa. In production the partitions\n> range in size from around 25 million rows to around 60 million rows, in QA\n> the partitions are between 4 and 12 million rows. I would think this would\n> be big enough to get the planner to prefer the index but this is the major\n> difference between the two database as far as I can tell.\n\n\nQA:\n\n> │ -> Seq Scan on event__99999999 e_1\n> (cost=0.00..2527918.06 rows=11457484 width=782) │\n>\n\nProduction:\n>\n> │ -> Index Scan using\n> ix_event__00011162_landing_id on event__00011162 e_1 (cost=0.56..15476.59\n> rows=23400 width=572) │\n\n\nIf QA has between 4 and 12 million rows, then the planner's row\nestimate for the condition thinks 11457484 are going to match, so a\nSeqscan is likely best here. 
If those estimates are off then it might\nbe worth double checking your nightly analyze is working correctly on\nQA.\n\nThe planner may be able to be coaxed into using the index with a\nhigher effective_cache_size and/or a lower random_page_cost setting,\nalthough you really should be looking at those row estimates first.\nShowing us the EXPLAIN ANALYZE would have been much more useful so\nthat we could have seen if those are accurate or not.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 14 Sep 2017 08:25:22 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "Query finally came back with an explain analyze :)\n\nIf Im reading this correctly postgres thinks the partition will return 6.5\nmillion matching rows but actually comes back with 162k. Is this a case\nwhere something is wrong with the analyze job?\n\nSeq Scan on event__00071000 e_4 (cost=0.00..2204374.94 rows=6523419\nwidth=785) (actual time=7020.509..448368.247 rows=162912 loops=1)\n\n\n\n\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n\n│\n QUERY PLAN\n │\n\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n\n│ Subquery Scan on rankings (cost=45357272.27..47351629.37 rows=39887142\nwidth=24) (actual time=6117566.189..6117619.805 rows=25190 loops=1)\n │\n\n│ -> WindowAgg (cost=45357272.27..46952757.95 rows=39887142 width=772)\n(actual time=6117566.101..6117611.266 rows=25190 loops=1)\n │\n\n│ -> Sort (cost=45357272.27..45456990.12 rows=39887142 width=772)\n(actual time=6117566.054..6117572.121 rows=25190 loops=1)\n │\n\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->>\n'Timestamp'::text)) DESC\n │\n\n│ Sort Method: quicksort Memory: 13757kB\n\n │\n\n│ -> Hash Join (cost=46.38..24740720.18 rows=39887142\nwidth=772) (actual time=1511499.761..6117335.382 rows=25190 loops=1)\n │\n\n│ Hash Cond: (e.landing_id = t_sap.landing_id)\n\n │\n\n│ -> Append (cost=0.00..24387085.38 rows=79774283\nwidth=776) (actual time=25522.442..6116672.504 rows=2481659 loops=1)\n │\n\n│ -> Seq Scan on event e (cost=0.00..1.36\nrows=1 width=97) (actual time=0.049..0.049 rows=0 loops=1)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 24\n\n │\n\n│ -> Seq Scan on event__99999999 e_1\n(cost=0.00..2527828.05\nrows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 12172186\n\n │\n\n│ -> Seq Scan on event__00069000 e_2\n(cost=0.00..1462613.93\nrows=5957018 width=771) (actual time=4486.295..370098.760 rows=183696\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 6956029\n\n │\n\n│ -> Seq Scan on event__00070000 e_3\n(cost=0.00..1534702.41\nrows=5991507 width=787) (actual time=3415.907..361606.800 rows=199081\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 7177444\n\n │\n\n│ -> Seq Scan on event__00071000 e_4\n(cost=0.00..2204374.94\nrows=6523419 width=785) (actual time=7020.509..448368.247 rows=162912\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 8091470\n\n │\n\n│ -> Seq Scan on event__00072000 e_5\n(cost=0.00..1531430.89\nrows=5814704 width=792) (actual time=25.304..343612.826 rows=214891 loops=1)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 7301151\n\n │\n\n│ -> Seq Scan on event__00073000 e_6\n(cost=0.00..1384865.48\nrows=5876959 width=767) (actual time=1631.133..424827.603 rows=163959\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND 
(validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 6523673\n\n │\n\n│ -> Seq Scan on event__00074000 e_7\n(cost=0.00..1289048.37\nrows=4747343 width=801) (actual time=3287.286..280317.057 rows=204394\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 5646711\n\n │\n\n│ -> Seq Scan on event__00075000 e_8\n(cost=0.00..1232277.70\nrows=3956864 width=790) (actual time=4806.148..259851.848 rows=183035\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 4798388\n\n │\n\n│ -> Seq Scan on event__00076000 e_9\n(cost=0.00..1426748.09\nrows=3730410 width=709) (actual time=7361.010..462819.583 rows=165404\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 4984478\n\n │\n\n│ -> Seq Scan on event__00077000 e_10\n(cost=0.00..1432209.39\nrows=4060602 width=728) (actual time=866.053..415228.726 rows=173185\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 4901988\n\n │\n\n│ -> Seq Scan on event__00078000 e_11\n(cost=0.00..1737134.71\nrows=4242651 width=699) (actual time=125.287..475699.803 rows=241807\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 5667558\n\n │\n\n│ -> Seq Scan on event__00079000 e_12\n(cost=0.00..1870531.43\nrows=4600400 width=783) (actual time=13.365..442326.202 rows=137087\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 5885216\n\n │\n\n│ -> Seq Scan on event__00080000 e_13\n(cost=0.00..1910751.06\nrows=5099576 width=794) (actual time=2.943..465024.506 rows=233592 loops=1)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 7475651\n\n │\n\n│ -> Seq Scan on event__00081000 e_14\n(cost=0.00..1455499.14\nrows=4358939 width=813) (actual time=25.965..341225.174 rows=157935\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 5368644\n\n │\n\n│ -> Seq Scan on event__00000000 e_15\n(cost=0.00..10.90\nrows=1 width=40) (actual time=0.002..0.002 rows=0 loops=1)\n │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ -> Seq Scan on event__00082000 e_16\n(cost=0.00..1387057.53\nrows=3430868 width=819) (actual time=99775.810..277914.901 rows=60639\nloops=1) │\n\n│ Filter: (((body ->> 'SID'::text) IS NOT\nNULL) AND (validation_status_code = 'P'::bpchar))\n │\n\n│ Rows Removed by Filter: 3144705\n\n │\n\n│ -> Hash (cost=43.88..43.88 rows=200 width=4)\n(actual time=0.084..0.084 rows=45 loops=1)\n │\n\n│ Buckets: 1024 Batches: 1 Memory Usage: 10kB\n\n │\n\n│ -> HashAggregate (cost=41.88..43.88 rows=200\nwidth=4) (actual time=0.054..0.067 rows=45 loops=1)\n │\n\n│ Group Key: t_sap.landing_id\n\n │\n\n│ -> Seq Scan on t_sap (cost=0.00..35.50\nrows=2550 width=4) (actual time=0.013..0.019 rows=45 loops=1)\n │\n\n│ Planning time: 4.955 ms\n\n │\n\n│ Execution time: 6117625.390 ms\n\n 
│\n\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n\nOn Wed, Sep 13, 2017 at 4:57 PM, David Rowley <[email protected]>\nwrote:\n\n> On 14 September 2017 at 08:28, Mike Broers <[email protected]> wrote:\n> > I have a query of a partitioned table that uses the partition index in\n> > production but uses sequence scans in qa. The only major difference I\n> can\n> > tell is the partitions are much smaller in qa. In production the\n> partitions\n> > range in size from around 25 million rows to around 60 million rows, in\n> QA\n> > the partitions are between 4 and 12 million rows. I would think this\n> would\n> > be big enough to get the planner to prefer the index but this is the\n> major\n> > difference between the two database as far as I can tell.\n>\n>\n> QA:\n>\n> > │ -> Seq Scan on event__99999999 e_1\n> > (cost=0.00..2527918.06 rows=11457484 width=782) │\n> >\n>\n> Production:\n> >\n> > │ -> Index Scan using\n> > ix_event__00011162_landing_id on event__00011162 e_1\n> (cost=0.56..15476.59\n> > rows=23400 width=572) │\n>\n>\n> If QA has between 4 and 12 million rows, then the planner's row\n> estimate for the condition thinks 11457484 are going to match, so a\n> Seqscan is likely best here. If those estimates are off then it might\n> be worth double checking your nightly analyze is working correctly on\n> QA.\n>\n> The planner may be able to be coaxed into using the index with a\n> higher effective_cache_size and/or a lower random_page_cost setting,\n> although you really should be looking at those row estimates first.\n> Showing us the EXPLAIN ANALYZE would have been much more useful so\n> that we could have seen if those are accurate or not.\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nQuery finally came back with an explain analyze :)If Im reading this correctly postgres thinks the partition will return 6.5 million matching rows but actually comes back with 162k. 
Is this a case where something is wrong with the analyze job?Seq Scan on event__00071000 e_4 (cost=0.00..2204374.94 rows=6523419 width=785) (actual time=7020.509..448368.247 rows=162912 loops=1)\n ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Subquery Scan on rankings (cost=45357272.27..47351629.37 rows=39887142 width=24) (actual time=6117566.189..6117619.805 rows=25190 loops=1) │\n│ -> WindowAgg (cost=45357272.27..46952757.95 rows=39887142 width=772) (actual time=6117566.101..6117611.266 rows=25190 loops=1) │\n│ -> Sort (cost=45357272.27..45456990.12 rows=39887142 width=772) (actual time=6117566.054..6117572.121 rows=25190 loops=1) │\n│ Sort Key: ((e.body ->> 'SID'::text)), ((e.body ->> 'Timestamp'::text)) DESC │\n│ Sort Method: quicksort Memory: 13757kB │\n│ -> Hash Join (cost=46.38..24740720.18 rows=39887142 width=772) (actual time=1511499.761..6117335.382 rows=25190 loops=1) │\n│ Hash Cond: (e.landing_id = t_sap.landing_id) │\n│ -> Append (cost=0.00..24387085.38 rows=79774283 width=776) (actual time=25522.442..6116672.504 rows=2481659 loops=1) │\n│ -> Seq Scan on event e (cost=0.00..1.36 rows=1 width=97) (actual time=0.049..0.049 rows=0 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 24 │\n│ -> Seq Scan on event__99999999 e_1 (cost=0.00..2527828.05 rows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 12172186 │\n│ -> Seq Scan on event__00069000 e_2 (cost=0.00..1462613.93 rows=5957018 width=771) (actual time=4486.295..370098.760 rows=183696 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 6956029 │\n│ -> Seq Scan on event__00070000 e_3 (cost=0.00..1534702.41 rows=5991507 width=787) (actual time=3415.907..361606.800 rows=199081 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 7177444 │\n│ -> Seq Scan on event__00071000 e_4 (cost=0.00..2204374.94 rows=6523419 width=785) (actual time=7020.509..448368.247 rows=162912 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 8091470 │\n│ -> Seq Scan on event__00072000 e_5 (cost=0.00..1531430.89 rows=5814704 width=792) (actual time=25.304..343612.826 rows=214891 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 7301151 │\n│ -> Seq Scan on event__00073000 e_6 (cost=0.00..1384865.48 rows=5876959 width=767) (actual time=1631.133..424827.603 rows=163959 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 6523673 │\n│ -> Seq Scan on event__00074000 e_7 (cost=0.00..1289048.37 rows=4747343 width=801) (actual time=3287.286..280317.057 rows=204394 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 5646711 │\n│ -> Seq Scan on 
event__00075000 e_8 (cost=0.00..1232277.70 rows=3956864 width=790) (actual time=4806.148..259851.848 rows=183035 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 4798388 │\n│ -> Seq Scan on event__00076000 e_9 (cost=0.00..1426748.09 rows=3730410 width=709) (actual time=7361.010..462819.583 rows=165404 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 4984478 │\n│ -> Seq Scan on event__00077000 e_10 (cost=0.00..1432209.39 rows=4060602 width=728) (actual time=866.053..415228.726 rows=173185 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 4901988 │\n│ -> Seq Scan on event__00078000 e_11 (cost=0.00..1737134.71 rows=4242651 width=699) (actual time=125.287..475699.803 rows=241807 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 5667558 │\n│ -> Seq Scan on event__00079000 e_12 (cost=0.00..1870531.43 rows=4600400 width=783) (actual time=13.365..442326.202 rows=137087 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 5885216 │\n│ -> Seq Scan on event__00080000 e_13 (cost=0.00..1910751.06 rows=5099576 width=794) (actual time=2.943..465024.506 rows=233592 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 7475651 │\n│ -> Seq Scan on event__00081000 e_14 (cost=0.00..1455499.14 rows=4358939 width=813) (actual time=25.965..341225.174 rows=157935 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 5368644 │\n│ -> Seq Scan on event__00000000 e_15 (cost=0.00..10.90 rows=1 width=40) (actual time=0.002..0.002 rows=0 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ -> Seq Scan on event__00082000 e_16 (cost=0.00..1387057.53 rows=3430868 width=819) (actual time=99775.810..277914.901 rows=60639 loops=1) │\n│ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar)) │\n│ Rows Removed by Filter: 3144705 │\n│ -> Hash (cost=43.88..43.88 rows=200 width=4) (actual time=0.084..0.084 rows=45 loops=1) │\n│ Buckets: 1024 Batches: 1 Memory Usage: 10kB │\n│ -> HashAggregate (cost=41.88..43.88 rows=200 width=4) (actual time=0.054..0.067 rows=45 loops=1) │\n│ Group Key: t_sap.landing_id │\n│ -> Seq Scan on t_sap (cost=0.00..35.50 rows=2550 width=4) (actual time=0.013..0.019 rows=45 loops=1) │\n│ Planning time: 4.955 ms │\n│ Execution time: 6117625.390 ms │\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘On Wed, Sep 13, 2017 at 4:57 PM, David Rowley <[email protected]> wrote:On 14 September 2017 at 08:28, Mike Broers <[email protected]> wrote:\r\n> I have a query of a partitioned table that uses the partition index in\r\n> production but uses sequence scans in qa. The only major difference I can\r\n> tell is the partitions are much smaller in qa. In production the partitions\r\n> range in size from around 25 million rows to around 60 million rows, in QA\r\n> the partitions are between 4 and 12 million rows. 
I would think this would\r\n> be big enough to get the planner to prefer the index but this is the major\r\n> difference between the two database as far as I can tell.\n\n\nQA:\n\r\n> │ -> Seq Scan on event__99999999 e_1\r\n> (cost=0.00..2527918.06 rows=11457484 width=782) │\r\n>\n\nProduction:\n>\r\n> │ -> Index Scan using\r\n> ix_event__00011162_landing_id on event__00011162 e_1 (cost=0.56..15476.59\r\n> rows=23400 width=572) │\n\n\nIf QA has between 4 and 12 million rows, then the planner's row\r\nestimate for the condition thinks 11457484 are going to match, so a\r\nSeqscan is likely best here. If those estimates are off then it might\r\nbe worth double checking your nightly analyze is working correctly on\r\nQA.\n\r\nThe planner may be able to be coaxed into using the index with a\r\nhigher effective_cache_size and/or a lower random_page_cost setting,\r\nalthough you really should be looking at those row estimates first.\r\nShowing us the EXPLAIN ANALYZE would have been much more useful so\r\nthat we could have seen if those are accurate or not.\n\r\n--\r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services",
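One way to check whether those partitions have been analyzed recently at all is the statistics view; a sketch that matches the partition naming seen in the plan above:

SELECT relname, last_analyze, last_autoanalyze, n_live_tup
FROM pg_stat_user_tables
WHERE relname ~ '^event__'
ORDER BY relname;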
"msg_date": "Fri, 15 Sep 2017 15:18:59 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "Mike Broers <[email protected]> writes:\n> If Im reading this correctly postgres thinks the partition will return 6.5\n> million matching rows but actually comes back with 162k. Is this a case\n> where something is wrong with the analyze job?\n\nYou've got a lot of scans there that're using conditions like\n\n> │ -> Seq Scan on event__99999999 e_1 (cost=0.00..2527828.05 rows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1)\n> │ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))\n> │ Rows Removed by Filter: 12172186\n\nWhile I'd expect the planner to be pretty solid on estimating the\nvalidation_status_code condition, it's not going to have any idea about\nthat JSON field test. That's apparently very selective, but you're just\ngetting a default estimate, which is not going to think that a NOT NULL\ntest will exclude lots of rows.\n\nOne thing you could consider doing about this is creating an index\non (body ->> 'SID'::text), which would prompt ANALYZE to gather statistics\nabout that expression. Even if the index weren't actually used in the\nplan, this might improve the estimates and the resulting planning choices\nenough to make it worth maintaining such an index.\n\nOr you could think about pulling that field out and storing it on its own.\nJSON columns are great for storing random unstructured data, but they are\nless great when you want to do relational-ish things on subfields.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Sep 2017 16:42:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "That makes a lot of sense, thanks for taking a look. An index like you\nsuggest would probably further improve the query. Is that suggestion\nsidestepping the original problem that production is evaluating the\nlanding_id bit with the partition index and qa is sequence scanning instead?\n\nAND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as\nrankings;\n\nBased on the difference in row estimate I am attempting an analyze with a\nhigher default_statistic_target (currently 100) to see if that helps.\n\n\n\n\nOn Fri, Sep 15, 2017 at 3:42 PM, Tom Lane <[email protected]> wrote:\n\n> Mike Broers <[email protected]> writes:\n> > If Im reading this correctly postgres thinks the partition will return\n> 6.5\n> > million matching rows but actually comes back with 162k. Is this a case\n> > where something is wrong with the analyze job?\n>\n> You've got a lot of scans there that're using conditions like\n>\n> > │ -> Seq Scan on event__99999999 e_1\n> (cost=0.00..2527828.05 rows=11383021 width=778) (actual\n> time=25522.389..747238.885 rows=42 loops=1)\n> > │ Filter: (((body ->> 'SID'::text) IS\n> NOT NULL) AND (validation_status_code = 'P'::bpchar))\n> > │ Rows Removed by Filter: 12172186\n>\n> While I'd expect the planner to be pretty solid on estimating the\n> validation_status_code condition, it's not going to have any idea about\n> that JSON field test. That's apparently very selective, but you're just\n> getting a default estimate, which is not going to think that a NOT NULL\n> test will exclude lots of rows.\n>\n> One thing you could consider doing about this is creating an index\n> on (body ->> 'SID'::text), which would prompt ANALYZE to gather statistics\n> about that expression. Even if the index weren't actually used in the\n> plan, this might improve the estimates and the resulting planning choices\n> enough to make it worth maintaining such an index.\n>\n> Or you could think about pulling that field out and storing it on its own.\n> JSON columns are great for storing random unstructured data, but they are\n> less great when you want to do relational-ish things on subfields.\n>\n> regards, tom lane\n>\n\nThat makes a lot of sense, thanks for taking a look. An index like you suggest would probably further improve the query. Is that suggestion sidestepping the original problem that production is evaluating the landing_id bit with the partition index and qa is sequence scanning instead?AND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as rankings; Based on the difference in row estimate I am attempting an analyze with a higher default_statistic_target (currently 100) to see if that helps.On Fri, Sep 15, 2017 at 3:42 PM, Tom Lane <[email protected]> wrote:Mike Broers <[email protected]> writes:\n> If Im reading this correctly postgres thinks the partition will return 6.5\n> million matching rows but actually comes back with 162k. Is this a case\n> where something is wrong with the analyze job?\n\nYou've got a lot of scans there that're using conditions like\n\n> │ -> Seq Scan on event__99999999 e_1 (cost=0.00..2527828.05 rows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1)\n> │ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))\n> │ Rows Removed by Filter: 12172186\n\nWhile I'd expect the planner to be pretty solid on estimating the\nvalidation_status_code condition, it's not going to have any idea about\nthat JSON field test. 
That's apparently very selective, but you're just\ngetting a default estimate, which is not going to think that a NOT NULL\ntest will exclude lots of rows.\n\nOne thing you could consider doing about this is creating an index\non (body ->> 'SID'::text), which would prompt ANALYZE to gather statistics\nabout that expression. Even if the index weren't actually used in the\nplan, this might improve the estimates and the resulting planning choices\nenough to make it worth maintaining such an index.\n\nOr you could think about pulling that field out and storing it on its own.\nJSON columns are great for storing random unstructured data, but they are\nless great when you want to do relational-ish things on subfields.\n\n regards, tom lane",
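Raising the target can also be done per column rather than instance-wide, which keeps ANALYZE cheap everywhere else; a sketch with an assumed target value:

ALTER TABLE event__99999999
    ALTER COLUMN validation_status_code SET STATISTICS 1000;  -- assumed value; the default is 100

ANALYZE event__99999999;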
"msg_date": "Fri, 15 Sep 2017 15:59:17 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "\n\nOn September 15, 2017 1:42:23 PM PDT, Tom Lane <[email protected]> wrote:\n>One thing you could consider doing about this is creating an index\n>on (body ->> 'SID'::text), which would prompt ANALYZE to gather\n>statistics\n>about that expression. Even if the index weren't actually used in the\n>plan, this might improve the estimates and the resulting planning\n>choices\n>enough to make it worth maintaining such an index.\n\nI'm wondering if we should extend the new CREATE STATISTICS framework to be able to do that without requiring an index. I.e. allow expressions and add a new type of stats that just correspond to what normal columns have. Could even create that implicitly for expression indexes, but allow to drop it, if the overtrading isn't worth it.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Sep 2017 14:03:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I'm wondering if we should extend the new CREATE STATISTICS framework to\n> be able to do that without requiring an index.\n\nI think that's already on the roadmap --- it's one of the reasons we\nended up with a SELECT-like syntax for CREATE STATISTICS. But it\ndidn't get done for v10.\n\nIf we do look at that as a substitute for \"make an expression index just\nso you get some stats\", it would be good to have a way to specify that you\nonly want the standard ANALYZE stats on that value and not the extended\nones.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Sep 2017 18:05:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "On 09/16/2017 12:05 AM, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n>> I'm wondering if we should extend the new CREATE STATISTICS\n>> framework to be able to do that without requiring an index.\n> \n> I think that's already on the roadmap --- it's one of the reasons we \n> ended up with a SELECT-like syntax for CREATE STATISTICS. But it \n> didn't get done for v10.\n> \n\nRight. It's one of the things I'd like to be working on after getting in\nthe more complex statistics types (MCV & histograms).\n\n> If we do look at that as a substitute for \"make an expression index\n> just so you get some stats\", it would be good to have a way to\n> specify that you only want the standard ANALYZE stats on that value\n> and not the extended ones.\n> \n\nNot sure I understand what you mean by \"extended\" - the statistics we\ncollect for expression indexes, or the CREATE STATISTICS stuff? I assume\nthe former, because if you don't want the latter then just don't create\nthe statistics. Or am I missing something?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 16 Sep 2017 00:26:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "I was able to add the suggested indexes\nlike stage.event__00075000((body->>'SID'::text)); and indeed these helped\nthe QA environment use those indexes instead of sequence scanning.\n\nI'm still perplexed by my original question, why production uses the\npartition index and qa does not?\n\nIndex Scan using ix_event__00014695_landing_id on event__00014695 e_3\n(cost=0.56..39137.89\nrows=37697 width=564) │\n\n│ Index Cond: (landing_id =\nt_sap.landing_id)\n\n\nUltimately I think this is just highlighting the need in my environment to\nset random_page_cost lower (we are on an SSD SAN anyway..), but I dont\nthink I have a satisfactory reason by the row estimates are so bad in the\nQA planner and why it doesnt use that partition index there.\n\n\n\n\nOn Fri, Sep 15, 2017 at 3:59 PM, Mike Broers <[email protected]> wrote:\n\n> That makes a lot of sense, thanks for taking a look. An index like you\n> suggest would probably further improve the query. Is that suggestion\n> sidestepping the original problem that production is evaluating the\n> landing_id bit with the partition index and qa is sequence scanning instead?\n>\n> AND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as\n> rankings;\n>\n> Based on the difference in row estimate I am attempting an analyze with a\n> higher default_statistic_target (currently 100) to see if that helps.\n>\n>\n>\n>\n> On Fri, Sep 15, 2017 at 3:42 PM, Tom Lane <[email protected]> wrote:\n>\n>> Mike Broers <[email protected]> writes:\n>> > If Im reading this correctly postgres thinks the partition will return\n>> 6.5\n>> > million matching rows but actually comes back with 162k. Is this a case\n>> > where something is wrong with the analyze job?\n>>\n>> You've got a lot of scans there that're using conditions like\n>>\n>> > │ -> Seq Scan on event__99999999 e_1\n>> (cost=0.00..2527828.05 rows=11383021 width=778) (actual\n>> time=25522.389..747238.885 rows=42 loops=1)\n>> > │ Filter: (((body ->> 'SID'::text) IS\n>> NOT NULL) AND (validation_status_code = 'P'::bpchar))\n>> > │ Rows Removed by Filter: 12172186\n>>\n>> While I'd expect the planner to be pretty solid on estimating the\n>> validation_status_code condition, it's not going to have any idea about\n>> that JSON field test. That's apparently very selective, but you're just\n>> getting a default estimate, which is not going to think that a NOT NULL\n>> test will exclude lots of rows.\n>>\n>> One thing you could consider doing about this is creating an index\n>> on (body ->> 'SID'::text), which would prompt ANALYZE to gather statistics\n>> about that expression. Even if the index weren't actually used in the\n>> plan, this might improve the estimates and the resulting planning choices\n>> enough to make it worth maintaining such an index.\n>>\n>> Or you could think about pulling that field out and storing it on its own.\n>> JSON columns are great for storing random unstructured data, but they are\n>> less great when you want to do relational-ish things on subfields.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nI was able to add the suggested indexes like stage.event__00075000((body->>'SID'::text)); and indeed these helped the QA environment use those indexes instead of sequence scanning. 
I'm still perplexed by my original question, why production uses the partition index and qa does not?Index Scan using ix_event__00014695_landing_id on event__00014695 e_3 (cost=0.56..39137.89 rows=37697 width=564) ││ Index Cond: (landing_id = t_sap.landing_id) Ultimately I think this is just highlighting the need in my environment to set random_page_cost lower (we are on an SSD SAN anyway..), but I dont think I have a satisfactory reason by the row estimates are so bad in the QA planner and why it doesnt use that partition index there.On Fri, Sep 15, 2017 at 3:59 PM, Mike Broers <[email protected]> wrote:That makes a lot of sense, thanks for taking a look. An index like you suggest would probably further improve the query. Is that suggestion sidestepping the original problem that production is evaluating the landing_id bit with the partition index and qa is sequence scanning instead?AND exists (select 1 from t_sap where e.landing_id = t_sap.landing_id)) as rankings; Based on the difference in row estimate I am attempting an analyze with a higher default_statistic_target (currently 100) to see if that helps.On Fri, Sep 15, 2017 at 3:42 PM, Tom Lane <[email protected]> wrote:Mike Broers <[email protected]> writes:\n> If Im reading this correctly postgres thinks the partition will return 6.5\n> million matching rows but actually comes back with 162k. Is this a case\n> where something is wrong with the analyze job?\n\nYou've got a lot of scans there that're using conditions like\n\n> │ -> Seq Scan on event__99999999 e_1 (cost=0.00..2527828.05 rows=11383021 width=778) (actual time=25522.389..747238.885 rows=42 loops=1)\n> │ Filter: (((body ->> 'SID'::text) IS NOT NULL) AND (validation_status_code = 'P'::bpchar))\n> │ Rows Removed by Filter: 12172186\n\nWhile I'd expect the planner to be pretty solid on estimating the\nvalidation_status_code condition, it's not going to have any idea about\nthat JSON field test. That's apparently very selective, but you're just\ngetting a default estimate, which is not going to think that a NOT NULL\ntest will exclude lots of rows.\n\nOne thing you could consider doing about this is creating an index\non (body ->> 'SID'::text), which would prompt ANALYZE to gather statistics\nabout that expression. Even if the index weren't actually used in the\nplan, this might improve the estimates and the resulting planning choices\nenough to make it worth maintaining such an index.\n\nOr you could think about pulling that field out and storing it on its own.\nJSON columns are great for storing random unstructured data, but they are\nless great when you want to do relational-ish things on subfields.\n\n regards, tom lane",
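If lowering random_page_cost does turn out to be the right call for the all-SSD environment, it can be scoped to a database or role instead of being changed in postgresql.conf; a sketch where the names and the 1.1 value are assumptions to experiment with, not recommendations:

ALTER DATABASE qa_db SET random_page_cost = 1.1;   -- per-database, so QA and production can differ deliberately

-- or scoped to everything a given application role runs:
ALTER ROLE app_user SET random_page_cost = 1.1;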
"msg_date": "Wed, 20 Sep 2017 11:15:53 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "On 21 September 2017 at 04:15, Mike Broers <[email protected]> wrote:\n> Ultimately I think this is just highlighting the need in my environment to\n> set random_page_cost lower (we are on an SSD SAN anyway..), but I dont think\n> I have a satisfactory reason by the row estimates are so bad in the QA\n> planner and why it doesnt use that partition index there.\n\nWithout the index there are no stats to allow the planner to perform a\ngood estimate on \"e.body->>'SID' is not null\", so it applies a default\nof 99.5%. So, as a simple example, if you have a partition with 1\nmillion rows. If you apply 99.5% to that you get 995000 rows. Now if\nyou add the selectivity for \"e.validation_status_code = 'P' \", let's\nsay that's 50%, the row estimate for the entire WHERE clause would be\n497500 (1000000 * 0.995 * 0.5). Since the 99.5% is applied in both\ncases, then the only variable part is validation_status_code. Perhaps\nvalidation_status_code = 'P' is much more common in QA than in\nproduction.\n\nYou can look at the stats as gathered by ANALYZE with:\n\n\\x on\nselect * from pg_stats where tablename = 'event__99999999' and attname\n= 'validation_status_code';\n\\x off\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Sep 2017 11:05:43 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
},
{
"msg_contents": "Very helpful thank you for the additional insight - I'd never checked into\npg_stats and that does reveal a difference in the distribution of the\nvalidation_status_code between qa and production:\n\nprod:\n│ most_common_vals │ {P,F} │\n│ most_common_freqs │ {0.925967,0.000933333} │\n│ histogram_bounds │ ❏ │\n│ correlation │ 0.995533 │\n\nqa:\n│ most_common_vals │ {P} │\n│ most_common_freqs │ {0.861633} │\n│ histogram_bounds │ ❏ │\n│ correlation │ 0.999961 │\n\nso the way I am reading this is that there is likely no sensible way to\navoid postgres thinking it will just have to scan the whole table because\nof these statistics. I can force it by setting session parameters for this\nparticular query but I probably shouldnt be looking at system settings to\nbrutally force random fetches.\n\nthanks again for the assistance!\n\n\n\nOn Wed, Sep 20, 2017 at 6:05 PM, David Rowley <[email protected]>\nwrote:\n\n> On 21 September 2017 at 04:15, Mike Broers <[email protected]> wrote:\n> > Ultimately I think this is just highlighting the need in my environment\n> to\n> > set random_page_cost lower (we are on an SSD SAN anyway..), but I dont\n> think\n> > I have a satisfactory reason by the row estimates are so bad in the QA\n> > planner and why it doesnt use that partition index there.\n>\n> Without the index there are no stats to allow the planner to perform a\n> good estimate on \"e.body->>'SID' is not null\", so it applies a default\n> of 99.5%. So, as a simple example, if you have a partition with 1\n> million rows. If you apply 99.5% to that you get 995000 rows. Now if\n> you add the selectivity for \"e.validation_status_code = 'P' \", let's\n> say that's 50%, the row estimate for the entire WHERE clause would be\n> 497500 (1000000 * 0.995 * 0.5). Since the 99.5% is applied in both\n> cases, then the only variable part is validation_status_code. Perhaps\n> validation_status_code = 'P' is much more common in QA than in\n> production.\n>\n> You can look at the stats as gathered by ANALYZE with:\n>\n> \\x on\n> select * from pg_stats where tablename = 'event__99999999' and attname\n> = 'validation_status_code';\n> \\x off\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nVery helpful thank you for the additional insight - I'd never checked into pg_stats and that does reveal a difference in the distribution of the validation_status_code between qa and production:prod:│ most_common_vals │ {P,F} ││ most_common_freqs │ {0.925967,0.000933333} ││ histogram_bounds │ ❏ ││ correlation │ 0.995533 │qa:│ most_common_vals │ {P} │ │ most_common_freqs │ {0.861633} │ │ histogram_bounds │ ❏ │ │ correlation │ 0.999961 │ so the way I am reading this is that there is likely no sensible way to avoid postgres thinking it will just have to scan the whole table because of these statistics. 
I can force it by setting session parameters for this particular query but I probably shouldnt be looking at system settings to brutally force random fetches.thanks again for the assistance!On Wed, Sep 20, 2017 at 6:05 PM, David Rowley <[email protected]> wrote:On 21 September 2017 at 04:15, Mike Broers <[email protected]> wrote:\n> Ultimately I think this is just highlighting the need in my environment to\n> set random_page_cost lower (we are on an SSD SAN anyway..), but I dont think\n> I have a satisfactory reason by the row estimates are so bad in the QA\n> planner and why it doesnt use that partition index there.\n\nWithout the index there are no stats to allow the planner to perform a\ngood estimate on \"e.body->>'SID' is not null\", so it applies a default\nof 99.5%. So, as a simple example, if you have a partition with 1\nmillion rows. If you apply 99.5% to that you get 995000 rows. Now if\nyou add the selectivity for \"e.validation_status_code = 'P' \", let's\nsay that's 50%, the row estimate for the entire WHERE clause would be\n497500 (1000000 * 0.995 * 0.5). Since the 99.5% is applied in both\ncases, then the only variable part is validation_status_code. Perhaps\nvalidation_status_code = 'P' is much more common in QA than in\nproduction.\n\nYou can look at the stats as gathered by ANALYZE with:\n\n\\x on\nselect * from pg_stats where tablename = 'event__99999999' and attname\n= 'validation_status_code';\n\\x off\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
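A sketch of limiting that kind of tweak to the one query instead of the whole system, using SET LOCAL so it expires with the transaction; the 1.1 value is an assumption, and the probe query is only a representative slice built from the filters and the EXISTS condition mentioned earlier in the thread:

BEGIN;
SET LOCAL random_page_cost = 1.1;   -- transaction-scoped; reverts automatically at COMMIT/ROLLBACK
EXPLAIN (ANALYZE, BUFFERS)
SELECT e.landing_id
FROM event__99999999 e
WHERE (e.body ->> 'SID') IS NOT NULL
  AND e.validation_status_code = 'P'
  AND EXISTS (SELECT 1 FROM t_sap WHERE t_sap.landing_id = e.landing_id);
COMMIT;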
"msg_date": "Mon, 25 Sep 2017 11:21:41 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query of partitioned object doesnt use index in qa"
}
] |
[
{
"msg_contents": "Hi\n\n*Requirement :- *\nWe need to retrieve latest health of around 1.5 million objects for a given\ntime.\n\n*Implementation :-*\nWe are storing hourly data of each object in single row. Given below is the\nschema :-\n\n*CREATE TABLE health_timeseries (*\n\n* mobid text NOT NULL,\n hour bigint NOT NULL,\n health real[]\n );*\n\n\nmobId - Object ID\n\nhour - Epoch hour\n\nhealth - Array of health values for a given hour of that object.\n\n\nEach object has 2 hours of health data (i.e. 2 rows for each object)\nso total no. of rows is around 3 million.\n\n\nWith the above approach the query to retrieve the latest health of all\nobjects for a given time duration is taking around *85 seconds*. I\nhave tried to increase the work_mem, effective_cache, shared_buffer to\n4 GB of PostgreSQL but still there was no improvement in the query\nexecution time.\n\n\n*Query :-*\n\n*select distinct on (health_timeseries.mobid) mobid,\nhealth_timeseries.health, health_timeseries.hour from\nhealth_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n<= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\nhealth_timeseries.hour DESC;*\n\n\n\n*Hardware Configuration of PostgreSQL VM :-*\n\n1. OS - Centos.\n\n2. Postgresql version - 9.6.2\n\n3. RAM - 16 GB RAM\n\n4. CPU - 8 vCPU\n\n\nPlease let us know the hardware configuration of PostgreSQL for such\nhuge dataset?\n\n\nAnd also let us know if there is any better schema/query to retrieve this data?\n\n\nThanks and Regards\n\nSubramaniam\n\nHiRequirement :- We need to retrieve latest health of around 1.5 million objects for a given time.Implementation :-We are storing hourly data of each object in single row. Given below is the schema :- CREATE TABLE health_timeseries ( mobid text NOT NULL,\n hour bigint NOT NULL,\n health real[]\n );mobId - Object IDhour - Epoch hourhealth - Array of health values for a given hour of that object.Each object has 2 hours of health data (i.e. 2 rows for each object) so total no. of rows is around 3 million.With the above approach the query to retrieve the latest health of all objects for a given time duration is taking around 85 seconds. I have tried to increase the work_mem, effective_cache, shared_buffer to 4 GB of PostgreSQL but still there was no improvement in the query execution time.Query :-select distinct on (health_timeseries.mobid) mobid, health_timeseries.health, health_timeseries.hour from health_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour <= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC, health_timeseries.hour DESC;Hardware Configuration of PostgreSQL VM :-1. OS - Centos.2. Postgresql version - 9.6.23. RAM - 16 GB RAM4. CPU - 8 vCPUPlease let us know the hardware configuration of PostgreSQL for such huge dataset?And also let us know if there is any better schema/query to retrieve this data?Thanks and RegardsSubramaniam",
"msg_date": "Thu, 14 Sep 2017 17:21:20 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Store/Retrieve time series data from PostgreSQL"
},
{
"msg_contents": "On 2017-09-14 13:51, Subramaniam C wrote:\n> Hi\n> \n> QUERY :-\n> \n> _select distinct on (health_timeseries.mobid) mobid,\n> health_timeseries.health, health_timeseries.hour from\n> health_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n> <= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\n> health_timeseries.hour DESC;_\n> \n\nDid you run EXPLAIN on this query to see what it is actually doing?\n\nWhat you are doing how is selecting all rows from the last hour,\nsorting them by mobid and hour, and then DISTINCT filters out al \nduplicates.\n\nSorting on mobid is therefor useless, DISTINCT still has to check all \nrows.\n\nSorting on mobid and hour will take a long time if there is no index for \nit,\nso if you don't have an index on the mobid and hour together then you \nshould probably try that.\n\n\nBut, see what EXPLAIN tells you first.\n\nRegards,\nVincent.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Sep 2017 14:03:20 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Store/Retrieve time series data from PostgreSQL"
},
{
"msg_contents": "I created index on morbid and hour together. Given below is the EXPLAIN\noutput\n\n------------------------------------------------------------------------------------------\n\n Unique (cost=606127.16..621098.42 rows=1087028 width=200)\n\n -> Sort (cost=606127.16..613612.79 rows=2994252 width=200)\n\n Sort Key: mobid DESC, hour DESC\n\n -> Seq Scan on health_timeseries (cost=0.00..284039.00\nrows=2994252 width=200)\n\n Filter: ((hour >= '418134'::bigint) AND (hour <=\n'418135'::bigint))\n\nOn Thu, Sep 14, 2017 at 5:33 PM, vinny <[email protected]> wrote:\n\n> On 2017-09-14 13:51, Subramaniam C wrote:\n>\n>> Hi\n>>\n>> QUERY :-\n>>\n>> _select distinct on (health_timeseries.mobid) mobid,\n>> health_timeseries.health, health_timeseries.hour from\n>> health_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n>> <= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\n>> health_timeseries.hour DESC;_\n>>\n>>\n> Did you run EXPLAIN on this query to see what it is actually doing?\n>\n> What you are doing how is selecting all rows from the last hour,\n> sorting them by mobid and hour, and then DISTINCT filters out al\n> duplicates.\n>\n> Sorting on mobid is therefor useless, DISTINCT still has to check all rows.\n>\n> Sorting on mobid and hour will take a long time if there is no index for\n> it,\n> so if you don't have an index on the mobid and hour together then you\n> should probably try that.\n>\n>\n> But, see what EXPLAIN tells you first.\n>\n> Regards,\n> Vincent.\n>\n\nI created index on morbid and hour together. Given below is the EXPLAIN output------------------------------------------------------------------------------------------\n Unique (cost=606127.16..621098.42 rows=1087028 width=200)\n -> Sort (cost=606127.16..613612.79 rows=2994252 width=200)\n Sort Key: mobid DESC, hour DESC\n -> Seq Scan on health_timeseries (cost=0.00..284039.00 rows=2994252 width=200)\n Filter: ((hour >= '418134'::bigint) AND (hour <= '418135'::bigint))On Thu, Sep 14, 2017 at 5:33 PM, vinny <[email protected]> wrote:On 2017-09-14 13:51, Subramaniam C wrote:\n\nHi\n\nQUERY :-\n\n_select distinct on (health_timeseries.mobid) mobid,\nhealth_timeseries.health, health_timeseries.hour from\nhealth_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n<= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\nhealth_timeseries.hour DESC;_\n\n\n\nDid you run EXPLAIN on this query to see what it is actually doing?\n\nWhat you are doing how is selecting all rows from the last hour,\nsorting them by mobid and hour, and then DISTINCT filters out al duplicates.\n\nSorting on mobid is therefor useless, DISTINCT still has to check all rows.\n\nSorting on mobid and hour will take a long time if there is no index for it,\nso if you don't have an index on the mobid and hour together then you should probably try that.\n\n\nBut, see what EXPLAIN tells you first.\n\nRegards,\nVincent.",
"msg_date": "Thu, 14 Sep 2017 18:03:58 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Store/Retrieve time series data from PostgreSQL"
},
{
"msg_contents": "With this query I am trying to get the latest hour for a given timestamp so\nthat I can get whole health array of all object for a given hour. So I am\ndoing DISTINCT on mobid and order by hour and mobid DESC.\n\nOn Thu, Sep 14, 2017 at 6:03 PM, Subramaniam C <[email protected]>\nwrote:\n\n> I created index on morbid and hour together. Given below is the EXPLAIN\n> output\n>\n> ------------------------------------------------------------\n> ------------------------------\n>\n> Unique (cost=606127.16..621098.42 rows=1087028 width=200)\n>\n> -> Sort (cost=606127.16..613612.79 rows=2994252 width=200)\n>\n> Sort Key: mobid DESC, hour DESC\n>\n> -> Seq Scan on health_timeseries (cost=0.00..284039.00\n> rows=2994252 width=200)\n>\n> Filter: ((hour >= '418134'::bigint) AND (hour <=\n> '418135'::bigint))\n>\n> On Thu, Sep 14, 2017 at 5:33 PM, vinny <[email protected]> wrote:\n>\n>> On 2017-09-14 13:51, Subramaniam C wrote:\n>>\n>>> Hi\n>>>\n>>> QUERY :-\n>>>\n>>> _select distinct on (health_timeseries.mobid) mobid,\n>>> health_timeseries.health, health_timeseries.hour from\n>>> health_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n>>> <= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\n>>> health_timeseries.hour DESC;_\n>>>\n>>>\n>> Did you run EXPLAIN on this query to see what it is actually doing?\n>>\n>> What you are doing how is selecting all rows from the last hour,\n>> sorting them by mobid and hour, and then DISTINCT filters out al\n>> duplicates.\n>>\n>> Sorting on mobid is therefor useless, DISTINCT still has to check all\n>> rows.\n>>\n>> Sorting on mobid and hour will take a long time if there is no index for\n>> it,\n>> so if you don't have an index on the mobid and hour together then you\n>> should probably try that.\n>>\n>>\n>> But, see what EXPLAIN tells you first.\n>>\n>> Regards,\n>> Vincent.\n>>\n>\n>\n\nWith this query I am trying to get the latest hour for a given timestamp so that I can get whole health array of all object for a given hour. So I am doing DISTINCT on mobid and order by hour and mobid DESC.On Thu, Sep 14, 2017 at 6:03 PM, Subramaniam C <[email protected]> wrote:I created index on morbid and hour together. 
Given below is the EXPLAIN output------------------------------------------------------------------------------------------\n Unique (cost=606127.16..621098.42 rows=1087028 width=200)\n -> Sort (cost=606127.16..613612.79 rows=2994252 width=200)\n Sort Key: mobid DESC, hour DESC\n -> Seq Scan on health_timeseries (cost=0.00..284039.00 rows=2994252 width=200)\n Filter: ((hour >= '418134'::bigint) AND (hour <= '418135'::bigint))On Thu, Sep 14, 2017 at 5:33 PM, vinny <[email protected]> wrote:On 2017-09-14 13:51, Subramaniam C wrote:\n\nHi\n\nQUERY :-\n\n_select distinct on (health_timeseries.mobid) mobid,\nhealth_timeseries.health, health_timeseries.hour from\nhealth_timeseries where hour >=(1505211054000/(3600*1000))-1 and hour\n<= 1505211054000/(3600*1000) ORDER BY health_timeseries.mobid DESC,\nhealth_timeseries.hour DESC;_\n\n\n\nDid you run EXPLAIN on this query to see what it is actually doing?\n\nWhat you are doing how is selecting all rows from the last hour,\nsorting them by mobid and hour, and then DISTINCT filters out al duplicates.\n\nSorting on mobid is therefor useless, DISTINCT still has to check all rows.\n\nSorting on mobid and hour will take a long time if there is no index for it,\nso if you don't have an index on the mobid and hour together then you should probably try that.\n\n\nBut, see what EXPLAIN tells you first.\n\nRegards,\nVincent.",
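A quick diagnostic to see whether the planner would use the new index at all when sequential scans are disallowed; this is session-only and for comparison, not a production setting:

SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (mobid) mobid, health, hour
FROM health_timeseries
WHERE hour >= (1505211054000/(3600*1000)) - 1
  AND hour <= 1505211054000/(3600*1000)
ORDER BY mobid DESC, hour DESC;
RESET enable_seqscan;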
"msg_date": "Thu, 14 Sep 2017 18:08:27 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Store/Retrieve time series data from PostgreSQL"
}
] |
[
{
"msg_contents": "I have a user who is trying to match overlapping duplicate phone info but\nfor different customer_ids.\n\nThe intended conditional could be expressed:\nIF the intersection of the sets\n{c.main_phone, c.secondary_phone}\nand\n{c1.main_phone, c1.secondary_phone}\nis not empty\nTHEN join\nEXCEPT where the intersection of the sets =\n{'0000000000'}\n\nHe wants a join like this:\n\nFROM customers c\nINNER JOIN customers c1 on (array[c.main_phone, c.secondary_phone] &&\n array[nullif(c1.main_phone, '0000000000') , nullif(c1.secondary_phone,\n'0000000000')])\n(array[c.main_phone, c.secondary_phone] && array[nullif(c1.main_phone,\n'0000000000') , nullif(c1.secondary_phone, '0000000000')])\nWHERE c.customer_id = 1;\n\nI want to index this part:\narray[nullif(c1.main_phone, '0000000000') , nullif(c1.secondary_phone,\n'0000000000')]\n\nFirst of all I see I can't create a btree index on an array. And with\nbtree_gin, this index is not being used:\n\nCREATE INDEX ON customers USING gin ((NULLIF(main_phone,\n'0000000000'::text)), (NULLIF(secondary_phone, '0000000000'::text)));\n\nWhat am I missing here? Is there a way to support a condition like this?\n\nThank you!\n\nI have a user who is trying to match overlapping duplicate phone info but for different customer_ids. The intended conditional could be expressed: IF the intersection of the sets{c.main_phone, c.secondary_phone}and{c1.main_phone, c1.secondary_phone}is not empty THEN join EXCEPT where the intersection of the sets ={'0000000000'}He wants a join like this:FROM customers cINNER JOIN customers c1 on (array[c.main_phone, c.secondary_phone] && array[nullif(c1.main_phone, '0000000000') , nullif(c1.secondary_phone, '0000000000')])(array[c.main_phone, c.secondary_phone] && array[nullif(c1.main_phone, '0000000000') , nullif(c1.secondary_phone, '0000000000')])WHERE c.customer_id = 1;I want to index this part:array[nullif(c1.main_phone, '0000000000') , nullif(c1.secondary_phone, '0000000000')]First of all I see I can't create a btree index on an array. And with btree_gin, this index is not being used:CREATE INDEX ON customers USING gin ((NULLIF(main_phone, '0000000000'::text)), (NULLIF(secondary_phone, '0000000000'::text)));What am I missing here? Is there a way to support a condition like this?Thank you!",
"msg_date": "Fri, 15 Sep 2017 15:51:01 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexing an array of two separate columns"
}
] |
[
{
"msg_contents": "I tried to use partitioning and have problem with it,\nas I get very bad perfomance. I cannot understand, what I am doing wrong.\n\n\nI set up master and child tables via inheritance, with range CHECK by date\nand with\ntrigger on 'insert', as described in the documentation.\n\nI was happy with insertion speed, it was about 30 megabytes per second that\nwas more than I expected,\nand server idle time was near 95 %. I used 100 parallel clients.\n\nHowever, when it came to updates things turned very bad.\nI set up a test with 30 running client making 10000 updates each in a\nrandom fashion.\nupdates via master table took 6 times longer and server idle time dropped\nto 15%, user CPU 75% with load average 15.\n\nTest details below\n\n300000 updates ( 30 processes 10000 selects each)\n\nvia master table 134 seconds\nvia child table 20 seconds\n\n300000 updates via master table without \"date1 >= '2017-09-06' and date1 <\n'2017-09-07'\" clause\n180 seconds\nThat means that constraint_exlusion works, however, the process of\nexclusion takes A LOT OF time.\n\nI tried to repeat the test with selects\n\n300000 selects ( 30 processes 10000 selects each)\n\nvia master table 50 seconds\nvia child table 8 seconds\n\nThis is very bad too.\n\nThe documentation says that it is not good to have 1000 partition, probably\n100 is OK, but I have only 40 partitions\nand have noticeable delays with only 5 partitions.\n\nWhat I also cannot understand, why time increase for 'select'\nis much higher (2.5 times) than time increase for 'update', considering\nthat 'where' clause is identical\nand assuming time is spent selecting relevant child tables.\n\nBest regards, Konstantin\n\nEnvironment description.\n\n\nPostgres 9.5 on linux\n\ndb=> select version();\n\nversion\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 9.5.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-11), 64-bit\n(1 row)\ndb=>\n\n\n16 CPU\n\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz\n\n128GB ram\n\n32GB shared_buffers\n\n\nTable statistics\n\ndb=> select count(*) from my_log_daily;\n count\n--------\n 408568\n(1 row)\n\ndb=> select count(*) from my_log_daily_170906;\n count\n--------\n 408568\n(1 row)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) select stage+1 from my_log_daily_170906 where\ndate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and instance='WS6';\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=4) (actual time=0.013..0.014 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time zone)\nAND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 0.135 ms\n Execution time: 0.029 ms\n(6 rows)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) select stage+1 from my_log_daily where date1\n>= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and 
instance='WS6';\n\nQUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..8.46 rows=2 width=4) (actual time=0.016..0.017 rows=1\nloops=1)\n Buffers: shared hit=4\n -> Append (cost=0.00..8.45 rows=2 width=4) (actual time=0.013..0.014\nrows=1 loops=1)\n Buffers: shared hit=4\n -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=4)\n(actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without\ntime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)\nAND (msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND\n((instance)::text = 'WS6'::text))\n -> Index Scan using my_log_daily_idx_170906 on\nmy_log_daily_170906 (cost=0.42..8.45 rows=1 width=4) (actual\ntime=0.012..0.013 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND\n(msgid3 = 1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without\ntime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 2.501 ms\n Execution time: 0.042 ms\n(12 rows)\n\ndb=>\n\nexplain (ANALYZE,BUFFERS) update my_log_daily_170906 set stage=stage+1\nwhere date1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253\nand msgid2=20756 and msgid3=1504712117 and instance='WS6';\n\nQUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on my_log_daily_170906 (cost=0.42..8.46 rows=1 width=186) (actual\ntime=0.133..0.133 rows=0 loops=1)\n Buffers: shared hit=5 dirtied=1\n -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=186) (actual time=0.014..0.015 rows=1 loops=1)\n Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 0.488 ms\n Execution time: 0.177 ms\n(8 rows)\n\ndb=>\nexplain (ANALYZE,BUFFERS) update my_log_daily set stage=stage+1 where\ndate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 and\nmsgid2=20756 and msgid3=1504712117 and instance='WS6';\n\nQUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on my_log_daily (cost=0.00..8.46 rows=2 width=587) (actual\ntime=0.052..0.052 rows=0 loops=1)\n Update on my_log_daily\n Update on my_log_daily_170906\n Buffers: shared hit=5\n -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=988) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone) AND\n(msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND\n((instance)::text = 'WS6'::text))\n -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906\n(cost=0.42..8.46 rows=1 width=186) (actual time=0.019..0.020 rows=1 loops=1)\n 
Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =\n1504712117) AND ((instance)::text = 'WS6'::text))\n Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time\nzone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone))\n Buffers: shared hit=4\n Planning time: 4.639 ms\n Execution time: 0.147 ms\n(12 rows)\n\n\ndb=> \\d my_log_daily\n Table \"public.my_log_daily\"\n Column | Type |\nModifiers\n------------+-----------------------------+----------------------------------------------------\n client_id | integer | not null\n pult | character varying(6) | not null\n opr | character varying(30) | not null\n handler | character varying(60) |\n msgid | integer |\n sclient_id | integer |\n stage | integer | default 0\n msgid1 | integer | default 0\n msgid2 | integer | default 0\n msgid3 | integer | default 0\n ended | smallint | default 0\n date1 | timestamp without time zone | default\n('now'::text)::timestamp without time zone\n date2 | timestamp without time zone |\n reserved1 | character varying(100) |\n reserved2 | character varying(100) |\n reserved3 | character varying(100) |\n atpco | smallint | not null default 0\n rsrvdnum1 | integer |\n rsrvdnum2 | integer |\n rsrvdnum3 | integer |\n instance | character varying(3) |\n duration | integer | default 0\n ip | integer |\nTriggers:\n insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROW\nEXECUTE PROCEDURE my_log_daily_insert_trigger()\nNumber of child tables: 40 (Use \\d+ to list them.)\n\ndb=>\n\nIndexes:\n \"my_log_daily_idx_170906\" UNIQUE, btree (msgid1, msgid2, msgid3,\ninstance)\n \"my_log_daily_date_170906\" btree (date1)\n \"my_log_daily_handler_170906\" btree (handler)\n \"my_log_daily_pult_170906\" btree (pult)\n \"my_log_daily_reserved1_170906\" btree (reserved1)\n \"my_log_daily_src_170906\" btree (client_id, date1)\nCheck constraints:\n \"my_log_daily_170906_date1_check\" CHECK (date1 >= '2017-09-06\n00:00:00'::timestamp without time zone AND date1 < '2017-09-07\n00:00:00'::timestamp without time zone)\nInherits: my_log_daily\n\ndb=>\n\n\n\na complete list of child tables below.\ntable descriptions including CHECK and indexes ( as well as trigger\nfunction ) are autogenerated, so there is no human error.\n\n\n-----------------\ndb=> \\d+ my_log_daily\n Table\n\"public.my_log_daily\"\n Column | Type |\nModifiers | Storage | Stats target | Description\n------------+-----------------------------+----------------------------------------------------+----------+--------------+-------------\n client_id | integer | not\nnull | plain | |\n pult | character varying(6) | not\nnull | extended | |\n opr | character varying(30) | not\nnull | extended | |\n handler | character varying(60)\n| | extended\n| |\n msgid | integer\n| | plain\n| |\n sclient_id | integer\n| | plain\n| |\n stage | integer | default\n0 | plain | |\n msgid1 | integer | default\n0 | plain | |\n msgid2 | integer | default\n0 | plain | |\n msgid3 | integer | default\n0 | plain | |\n ended | smallint | default\n0 | plain | |\n date1 | timestamp without time zone | default\n('now'::text)::timestamp without time zone | plain | |\n date2 | timestamp without time zone\n| | plain\n| |\n reserved1 | character varying(100)\n| | extended\n| |\n reserved2 | character varying(100)\n| | extended\n| |\n reserved3 | character varying(100)\n| | extended\n| |\n atpco | smallint | not null default\n0 | plain | |\n rsrvdnum1 | integer\n| | plain\n| |\n rsrvdnum2 | integer\n| | plain\n| |\n rsrvdnum3 | integer\n| | plain\n| |\n instance | 
character varying(3)\n| | extended\n| |\n duration | integer | default\n0 | plain | |\n ip | integer\n| | plain\n| |\nTriggers:\n insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROW\nEXECUTE PROCEDURE my_log_daily_insert_trigger()\nChild tables: my_log_daily_170901,\n my_log_daily_170902,\n my_log_daily_170903,\n my_log_daily_170904,\n my_log_daily_170905,\n my_log_daily_170906,\n my_log_daily_170907,\n my_log_daily_170908,\n my_log_daily_170909,\n my_log_daily_170910,\n my_log_daily_170911,\n my_log_daily_170912,\n my_log_daily_170913,\n my_log_daily_170914,\n my_log_daily_170915,\n my_log_daily_170916,\n my_log_daily_170917,\n my_log_daily_170918,\n my_log_daily_170919,\n my_log_daily_170920,\n my_log_daily_170921,\n my_log_daily_170922,\n my_log_daily_170923,\n my_log_daily_170924,\n my_log_daily_170925,\n my_log_daily_170926,\n my_log_daily_170927,\n my_log_daily_170928,\n my_log_daily_170929,\n my_log_daily_170930,\n my_log_daily_171001,\n my_log_daily_171002,\n my_log_daily_171003,\n my_log_daily_171004,\n my_log_daily_171005,\n my_log_daily_171006,\n hh my_log_daily_171007,\n my_log_daily_171008,\n my_log_daily_171009,\n my_log_daily_171010\n\ndb=>\n\nI tried to use partitioning and have problem with it,as I get very bad perfomance. I cannot understand, what I am doing wrong.I set up master and child tables via inheritance, with range CHECK by dateand withtrigger on 'insert', as described in the documentation.I was happy with insertion speed, it was about 30 megabytes per second thatwas more than I expected,and server idle time was near 95 %. I used 100 parallel clients.However, when it came to updates things turned very bad.I set up a test with 30 running client making 10000 updates each in arandom fashion.updates via master table took 6 times longer and server idle time droppedto 15%, user CPU 75% with load average 15.Test details below300000 updates ( 30 processes 10000 selects each)via master table 134 secondsvia child table 20 seconds300000 updates via master table without \"date1 >= '2017-09-06' and date1 <'2017-09-07'\" clause180 secondsThat means that constraint_exlusion works, however, the process ofexclusion takes A LOT OF time.I tried to repeat the test with selects300000 selects ( 30 processes 10000 selects each)via master table 50 secondsvia child table 8 secondsThis is very bad too.The documentation says that it is not good to have 1000 partition, probably100 is OK, but I have only 40 partitionsand have noticeable delays with only 5 partitions.What I also cannot understand, why time increase for 'select'is much higher (2.5 times) than time increase for 'update', consideringthat 'where' clause is identicaland assuming time is spent selecting relevant child tables.Best regards, KonstantinEnvironment description.Postgres 9.5 on linuxdb=> select version();version---------------------------------------------------------------------------------------------------------- PostgreSQL 9.5.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.520150623 (Red Hat 4.8.5-11), 64-bit(1 row)db=>16 CPUvendor_id : GenuineIntelcpu family : 6model : 45model name : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz128GB ram32GB shared_buffersTable statisticsdb=> select count(*) from my_log_daily; count-------- 408568(1 row)db=> select count(*) from my_log_daily_170906; count-------- 408568(1 row)db=>explain (ANALYZE,BUFFERS) select stage+1 from my_log_daily_170906 wheredate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 andmsgid2=20756 and msgid3=1504712117 and 
instance='WS6'; QUERYPLAN------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using my_log_daily_idx_170906 on my_log_daily_170906(cost=0.42..8.46 rows=1 width=4) (actual time=0.013..0.014 rows=1 loops=1) Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =1504712117) AND ((instance)::text = 'WS6'::text)) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without time zone)AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)) Buffers: shared hit=4 Planning time: 0.135 ms Execution time: 0.029 ms(6 rows)db=>explain (ANALYZE,BUFFERS) select stage+1 from my_log_daily where date1>= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 andmsgid2=20756 and msgid3=1504712117 and instance='WS6';QUERYPLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Result (cost=0.00..8.46 rows=2 width=4) (actual time=0.016..0.017 rows=1loops=1) Buffers: shared hit=4 -> Append (cost=0.00..8.45 rows=2 width=4) (actual time=0.013..0.014rows=1 loops=1) Buffers: shared hit=4 -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=4)(actual time=0.000..0.000 rows=0 loops=1) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp withouttime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)AND (msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND((instance)::text = 'WS6'::text)) -> Index Scan using my_log_daily_idx_170906 onmy_log_daily_170906 (cost=0.42..8.45 rows=1 width=4) (actualtime=0.012..0.013 rows=1 loops=1) Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND(msgid3 = 1504712117) AND ((instance)::text = 'WS6'::text)) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp withouttime zone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)) Buffers: shared hit=4 Planning time: 2.501 ms Execution time: 0.042 ms(12 rows)db=>explain (ANALYZE,BUFFERS) update my_log_daily_170906 set stage=stage+1where date1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253and msgid2=20756 and msgid3=1504712117 and instance='WS6';QUERYPLAN--------------------------------------------------------------------------------------------------------------------------------------------------------- Update on my_log_daily_170906 (cost=0.42..8.46 rows=1 width=186) (actualtime=0.133..0.133 rows=0 loops=1) Buffers: shared hit=5 dirtied=1 -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906(cost=0.42..8.46 rows=1 width=186) (actual time=0.014..0.015 rows=1 loops=1) Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =1504712117) AND ((instance)::text = 'WS6'::text)) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without timezone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)) Buffers: shared hit=4 Planning time: 0.488 ms Execution time: 0.177 ms(8 rows)db=>explain (ANALYZE,BUFFERS) update my_log_daily set stage=stage+1 wheredate1 >= '2017-09-06' and date1 < '2017-09-07' and msgid1=3414253 andmsgid2=20756 and msgid3=1504712117 and 
instance='WS6';QUERYPLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Update on my_log_daily (cost=0.00..8.46 rows=2 width=587) (actualtime=0.052..0.052 rows=0 loops=1) Update on my_log_daily Update on my_log_daily_170906 Buffers: shared hit=5 -> Seq Scan on my_log_daily (cost=0.00..0.00 rows=1 width=988) (actualtime=0.001..0.001 rows=0 loops=1) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without timezone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone) AND(msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 = 1504712117) AND((instance)::text = 'WS6'::text)) -> Index Scan using my_log_daily_idx_170906 on my_log_daily_170906(cost=0.42..8.46 rows=1 width=186) (actual time=0.019..0.020 rows=1 loops=1) Index Cond: ((msgid1 = 3414253) AND (msgid2 = 20756) AND (msgid3 =1504712117) AND ((instance)::text = 'WS6'::text)) Filter: ((date1 >= '2017-09-06 00:00:00'::timestamp without timezone) AND (date1 < '2017-09-07 00:00:00'::timestamp without time zone)) Buffers: shared hit=4 Planning time: 4.639 ms Execution time: 0.147 ms(12 rows)db=> \\d my_log_daily Table \"public.my_log_daily\" Column | Type |Modifiers------------+-----------------------------+---------------------------------------------------- client_id | integer | not null pult | character varying(6) | not null opr | character varying(30) | not null handler | character varying(60) | msgid | integer | sclient_id | integer | stage | integer | default 0 msgid1 | integer | default 0 msgid2 | integer | default 0 msgid3 | integer | default 0 ended | smallint | default 0 date1 | timestamp without time zone | default('now'::text)::timestamp without time zone date2 | timestamp without time zone | reserved1 | character varying(100) | reserved2 | character varying(100) | reserved3 | character varying(100) | atpco | smallint | not null default 0 rsrvdnum1 | integer | rsrvdnum2 | integer | rsrvdnum3 | integer | instance | character varying(3) | duration | integer | default 0 ip | integer |Triggers: insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROWEXECUTE PROCEDURE my_log_daily_insert_trigger()Number of child tables: 40 (Use \\d+ to list them.)db=>Indexes: \"my_log_daily_idx_170906\" UNIQUE, btree (msgid1, msgid2, msgid3,instance) \"my_log_daily_date_170906\" btree (date1) \"my_log_daily_handler_170906\" btree (handler) \"my_log_daily_pult_170906\" btree (pult) \"my_log_daily_reserved1_170906\" btree (reserved1) \"my_log_daily_src_170906\" btree (client_id, date1)Check constraints: \"my_log_daily_170906_date1_check\" CHECK (date1 >= '2017-09-0600:00:00'::timestamp without time zone AND date1 < '2017-09-0700:00:00'::timestamp without time zone)Inherits: my_log_dailydb=>a complete list of child tables below.table descriptions including CHECK and indexes ( as well as triggerfunction ) are autogenerated, so there is no human error.-----------------db=> \\d+ my_log_daily Table\"public.my_log_daily\" Column | Type |Modifiers | Storage | Stats target | Description------------+-----------------------------+----------------------------------------------------+----------+--------------+------------- client_id | integer | notnull | plain | | pult | character varying(6) | notnull | extended | | opr | character varying(30) | notnull | extended | | handler | character varying(60)| | extended| | msgid | integer| | plain| 
| sclient_id | integer| | plain| | stage | integer | default0 | plain | | msgid1 | integer | default0 | plain | | msgid2 | integer | default0 | plain | | msgid3 | integer | default0 | plain | | ended | smallint | default0 | plain | | date1 | timestamp without time zone | default('now'::text)::timestamp without time zone | plain | | date2 | timestamp without time zone| | plain| | reserved1 | character varying(100)| | extended| | reserved2 | character varying(100)| | extended| | reserved3 | character varying(100)| | extended| | atpco | smallint | not null default0 | plain | | rsrvdnum1 | integer| | plain| | rsrvdnum2 | integer| | plain| | rsrvdnum3 | integer| | plain| | instance | character varying(3)| | extended| | duration | integer | default0 | plain | | ip | integer| | plain| |Triggers: insert_my_log_daily_trigger BEFORE INSERT ON my_log_daily FOR EACH ROWEXECUTE PROCEDURE my_log_daily_insert_trigger()Child tables: my_log_daily_170901, my_log_daily_170902, my_log_daily_170903, my_log_daily_170904, my_log_daily_170905, my_log_daily_170906, my_log_daily_170907, my_log_daily_170908, my_log_daily_170909, my_log_daily_170910, my_log_daily_170911, my_log_daily_170912, my_log_daily_170913, my_log_daily_170914, my_log_daily_170915, my_log_daily_170916, my_log_daily_170917, my_log_daily_170918, my_log_daily_170919, my_log_daily_170920, my_log_daily_170921, my_log_daily_170922, my_log_daily_170923, my_log_daily_170924, my_log_daily_170925, my_log_daily_170926, my_log_daily_170927, my_log_daily_170928, my_log_daily_170929, my_log_daily_170930, my_log_daily_171001, my_log_daily_171002, my_log_daily_171003, my_log_daily_171004, my_log_daily_171005, my_log_daily_171006, hh my_log_daily_171007, my_log_daily_171008, my_log_daily_171009, my_log_daily_171010db=>",
"msg_date": "Sun, 17 Sep 2017 08:34:48 +0000",
"msg_from": "Konstantin Kivi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning perfomance issue"
}
] |
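A minimal sketch of the inheritance setup described in the thread above: the parent/child/trigger shapes follow the \d output Konstantin posted, but the trigger body here is only an illustrative assumption, since his real one is autogenerated.

    -- One daily child with the range CHECK used for constraint exclusion (sketch).
    CREATE TABLE my_log_daily_170906 (
        CHECK (date1 >= '2017-09-06 00:00:00' AND date1 < '2017-09-07 00:00:00')
    ) INHERITS (my_log_daily);

    -- Hypothetical routing trigger; RETURN NULL keeps the row out of the (empty) parent.
    CREATE OR REPLACE FUNCTION my_log_daily_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.date1 >= '2017-09-06' AND NEW.date1 < '2017-09-07' THEN
            INSERT INTO my_log_daily_170906 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for date1 = %', NEW.date1;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER insert_my_log_daily_trigger
        BEFORE INSERT ON my_log_daily
        FOR EACH ROW EXECUTE PROCEDURE my_log_daily_insert_trigger();

    -- constraint_exclusion must be at least 'partition' (the default) for the CHECKs to prune children.
    SET constraint_exclusion = partition;

The planning times in the EXPLAIN output above (2.5 ms and 4.6 ms through the master versus 0.1-0.5 ms against the child) are where that per-statement overhead shows up: with inheritance partitioning the planner re-examines every child's CHECK constraint for each query, so short single-row statements routed through the master pay proportionally far more than bulk inserts do.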
[
{
"msg_contents": "Hello All\n\nI am using Postgresql extension pageinspect.\n\nCould someone tell me the meaning of these columns: magic, version, root,\nlevel, fastroot, fastlevel of the bt_metap function.\n\nThis information is not presents in the documentation.\n\nThe height of the b-tree (position of node farthest from root to leaf), is\nthe column Level?\n\nSee below a return query that I ran on an index called\nidx_l_shipmodelineitem000\n\n------------------------------------------------------------------\npostgres # SELECT * FROM bt_metap ('idx_l_shipmodelineitem000');\npostgres # magic | version | root | level | fastroot | fastlevel\npostgres # 340322 | 2 | 41827 | 3 | 41827 | 3\n\nBest regards\nNeto\n\nHello AllI am using Postgresql extension pageinspect.Could someone tell me the meaning of these columns: magic, version, root, level, fastroot, fastlevel of the bt_metap function.This information is not presents in the documentation.The height of the b-tree (position of node farthest from root to leaf), is the column Level?See below a return query that I ran on an index called idx_l_shipmodelineitem000------------------------------------------------------------------postgres # SELECT * FROM bt_metap ('idx_l_shipmodelineitem000');postgres # magic | version | root | level | fastroot | fastlevelpostgres # 340322 | 2 | 41827 | 3 | 41827 | 3Best regardsNeto",
"msg_date": "Sun, 17 Sep 2017 18:52:37 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pageinspect bt_metap help"
},
{
"msg_contents": "On Sun, Sep 17, 2017 at 2:52 PM, Neto pr <[email protected]> wrote:\n> I am using Postgresql extension pageinspect.\n>\n> Could someone tell me the meaning of these columns: magic, version, root,\n> level, fastroot, fastlevel of the bt_metap function.\n>\n> This information is not presents in the documentation.\n\nA magic number distinguishes the meta-page as a B-Tree meta-page. A\nversion number is used for each major incompatible revision of the\nB-Tree code (these are very infrequent).\n\nThe fast root can differ from the true root following a deletion\npattern that leaves a \"skinny index\". The implementation can never\nremove a level, essentially because it's optimized for concurrency,\nthough it can have a fast root, to just skip levels. This happens to\nlevels that no longer contain any distinguishing information in their\nsingle internal page.\n\nI imagine that in practice the large majority of B-Trees never have a\ntrue root that differs from its fast root - you see this with repeated\nlarge range deletions. Probably nothing to worry about.\n\n> The height of the b-tree (position of node farthest from root to leaf), is\n> the column Level?\n\nYes.\n\nIf you want to learn more about the B-Tree code, I suggest that you\nstart by looking at the code for contrib/amcheck.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 17 Sep 2017 14:59:34 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pageinspect bt_metap help"
},
{
"msg_contents": "Very interesting information.\nSee if I'm right, so for performance purposes, would it be better to\nconsider the columns: fast_root and fast_level instead of the root and\nlevel columns?\n\nI have read that even deleting records the B-tree tree is not rebuilt, so\nit does not cause overhead in dbms, and can have null pointers.\n\nIn my example, the values of fast_root, fast_root are equal to root,\nlevel, I believe that due to the newly created index and no delete\noperations occurred in the table.\n\nBest Regards\nNeto\n\n2017-09-17 18:59 GMT-03:00 Peter Geoghegan <[email protected]>:\n\n> On Sun, Sep 17, 2017 at 2:52 PM, Neto pr <[email protected]> wrote:\n> > I am using Postgresql extension pageinspect.\n> >\n> > Could someone tell me the meaning of these columns: magic, version, root,\n> > level, fastroot, fastlevel of the bt_metap function.\n> >\n> > This information is not presents in the documentation.\n>\n> A magic number distinguishes the meta-page as a B-Tree meta-page. A\n> version number is used for each major incompatible revision of the\n> B-Tree code (these are very infrequent).\n>\n> The fast root can differ from the true root following a deletion\n> pattern that leaves a \"skinny index\". The implementation can never\n> remove a level, essentially because it's optimized for concurrency,\n> though it can have a fast root, to just skip levels. This happens to\n> levels that no longer contain any distinguishing information in their\n> single internal page.\n>\n> I imagine that in practice the large majority of B-Trees never have a\n> true root that differs from its fast root - you see this with repeated\n> large range deletions. Probably nothing to worry about.\n>\n> > The height of the b-tree (position of node farthest from root to leaf),\n> is\n> > the column Level?\n>\n> Yes.\n>\n> If you want to learn more about the B-Tree code, I suggest that you\n> start by looking at the code for contrib/amcheck.\n>\n> --\n> Peter Geoghegan\n>\n\nVery interesting information.See if I'm right, so for performance purposes, would it be better to consider the columns: fast_root and fast_level instead of the root and level columns?I have read that even deleting records the B-tree tree is not rebuilt, so it does not cause overhead in dbms, and can have null pointers.In my example, the values of fast_root, fast_root are equal to root, level, I believe that due to the newly created index and no delete operations occurred in the table.Best Regards Neto2017-09-17 18:59 GMT-03:00 Peter Geoghegan <[email protected]>:On Sun, Sep 17, 2017 at 2:52 PM, Neto pr <[email protected]> wrote:\n> I am using Postgresql extension pageinspect.\n>\n> Could someone tell me the meaning of these columns: magic, version, root,\n> level, fastroot, fastlevel of the bt_metap function.\n>\n> This information is not presents in the documentation.\n\nA magic number distinguishes the meta-page as a B-Tree meta-page. A\nversion number is used for each major incompatible revision of the\nB-Tree code (these are very infrequent).\n\nThe fast root can differ from the true root following a deletion\npattern that leaves a \"skinny index\". The implementation can never\nremove a level, essentially because it's optimized for concurrency,\nthough it can have a fast root, to just skip levels. 
This happens to\nlevels that no longer contain any distinguishing information in their\nsingle internal page.\n\nI imagine that in practice the large majority of B-Trees never have a\ntrue root that differs from its fast root - you see this with repeated\nlarge range deletions. Probably nothing to worry about.\n\n> The height of the b-tree (position of node farthest from root to leaf), is\n> the column Level?\n\nYes.\n\nIf you want to learn more about the B-Tree code, I suggest that you\nstart by looking at the code for contrib/amcheck.\n\n--\nPeter Geoghegan",
"msg_date": "Mon, 18 Sep 2017 11:31:47 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pageinspect bt_metap help"
},
{
"msg_contents": "On Mon, Sep 18, 2017 at 7:31 AM, Neto pr <[email protected]> wrote:\n> In my example, the values of fast_root, fast_root are equal to root, level,\n> I believe that due to the newly created index and no delete operations\n> occurred in the table.\n\nFast root and true root will probably never be different, even when\nthere are many deletions, including page deletions by VACUUM. As I\nunderstand it, the fast root thing is for a fairly rare, though still\nimportant edge case. It's a way of working around the fact that a\nB-Tree can never become shorter due to the locking protocols not\nallowing it. We can instead just pretend that it's shorter, knowing\nthat upper levels don't contain useful information.\n\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Sep 2017 18:07:26 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pageinspect bt_metap help"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Mon, Sep 18, 2017 at 7:31 AM, Neto pr <[email protected]> wrote:\n>> In my example, the values of fast_root, fast_root are equal to root, level,\n>> I believe that due to the newly created index and no delete operations\n>> occurred in the table.\n\n> Fast root and true root will probably never be different, even when\n> there are many deletions, including page deletions by VACUUM. As I\n> understand it, the fast root thing is for a fairly rare, though still\n> important edge case. It's a way of working around the fact that a\n> B-Tree can never become shorter due to the locking protocols not\n> allowing it. We can instead just pretend that it's shorter, knowing\n> that upper levels don't contain useful information.\n\nMy (vague) recollection is that it's actually useful in cases where the\nlive key-space constantly migrates to the right, so that the original\nupper-level key splits would become impossibly unbalanced. This isn't\nall that unusual a situation; consider timestamp keys for instance,\nin a table where old data gets flushed regularly.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Sep 2017 23:28:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pageinspect bt_metap help"
}
] |
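As a small follow-up sketch, the meta-page fields discussed above can be read for any B-Tree index like this; the index name and root block number (41827) are taken from the thread, the rest is stock pageinspect.

    CREATE EXTENSION IF NOT EXISTS pageinspect;   -- superuser only

    -- level is the height of the tree above the leaves (leaf pages are level 0);
    -- root/fastroot and level/fastlevel normally match, as they do in the thread.
    SELECT magic, version, root, level, fastroot, fastlevel
    FROM bt_metap('idx_l_shipmodelineitem000');

    -- Inspecting the root block itself; the per-page level column is called
    -- btpo in the 9.x/10 releases of this thread (btpo_level in recent ones).
    SELECT blkno, type, live_items, dead_items, btpo
    FROM bt_page_stats('idx_l_shipmodelineitem000', 41827);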
[
{
"msg_contents": "I use materialized views to cache results from a foreign data wrapper to a\nhigh latency, fairly large (cloud) Hadoop instance. In order to boost\nrefresh times I split the FDW and materialized views up into partitions.\n\nNote: I can't use pg_partman or native partitioning because those don't\nreally work with this architecture - they are designed for \"real\" tables.\nI can't really use citus because it isn't FDW/matview aware at this time\neither.\n\nI then join the various materialized views together with a regular view\nmade up of a bunch of 'union all' statements.\n\nI have a set of functions which automatically create the new partitions and\nthen replace the top level view to add them in on the fly. At this time I\nprobably have about 60 partitions.\n\nWith that approach I can refresh individual chunks of data, or I can\nrefresh several chunks in parallel. Generally this has been working pretty\nwell. One side effect is that because this is not a real partition, the\nplanner does have to check each partition whenever I run a query to see if\nit has the data I need. With appropriate indexes, this is ok, checking the\npartitions that don't have the data is very quick. It does make for some\nlong explain outputs though.\n\nThe challenge is that because of an exponential rate of data growth, I\nmight have to significantly increase the number of partitions I'm working\nwith - to several hundred at a minimum and potentially more than 1000...\n\nThis leads me to the question how many 'union all' statements can I have in\none view? Should I create a hierarchy of views to gradually roll the data\nup instead of putting them all in one top-level view?\n\nI use materialized views to cache results from a foreign data wrapper to a high latency, fairly large (cloud) Hadoop instance. In order to boost refresh times I split the FDW and materialized views up into partitions. Note: I can't use pg_partman or native partitioning because those don't really work with this architecture - they are designed for \"real\" tables. I can't really use citus because it isn't FDW/matview aware at this time either.I then join the various materialized views together with a regular view made up of a bunch of 'union all' statements.I have a set of functions which automatically create the new partitions and then replace the top level view to add them in on the fly. At this time I probably have about 60 partitions.With that approach I can refresh individual chunks of data, or I can refresh several chunks in parallel. Generally this has been working pretty well. One side effect is that because this is not a real partition, the planner does have to check each partition whenever I run a query to see if it has the data I need. With appropriate indexes, this is ok, checking the partitions that don't have the data is very quick. It does make for some long explain outputs though.The challenge is that because of an exponential rate of data growth, I might have to significantly increase the number of partitions I'm working with - to several hundred at a minimum and potentially more than 1000...This leads me to the question how many 'union all' statements can I have in one view? Should I create a hierarchy of views to gradually roll the data up instead of putting them all in one top-level view?",
"msg_date": "Mon, 18 Sep 2017 07:25:14 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "max partitions behind a view?"
},
{
"msg_contents": "Rick Otten <[email protected]> writes:\n> The challenge is that because of an exponential rate of data growth, I\n> might have to significantly increase the number of partitions I'm working\n> with - to several hundred at a minimum and potentially more than 1000...\n\n> This leads me to the question how many 'union all' statements can I have in\n> one view?\n\nI don't think there's a hard limit short of INT32_MAX or so, but I'd be\nworried about whether there are any O(N^2) algorithms that would start\nto be noticeable at the O(1000) level.\n\n> Should I create a hierarchy of views to gradually roll the data\n> up instead of putting them all in one top-level view?\n\nThat would likely make things worse not better; the planner would flatten\nthem anyway and would expend extra cycles doing so. You could perhaps\nstop the flattening with optimization fences (OFFSET 0) but I really doubt\nyou want the side-effects of that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Sep 2017 09:55:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max partitions behind a view?"
}
] |
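A stripped-down sketch of the pattern described above, with hypothetical names (sales_fdw, sales_mv_*, sales_all); in the real setup the partition list and the CREATE OR REPLACE VIEW would be generated by the maintenance functions rather than written by hand.

    -- One materialized view per chunk, each caching a slice of the foreign table.
    CREATE MATERIALIZED VIEW sales_mv_2017_08 AS
        SELECT * FROM sales_fdw WHERE sale_month = '2017-08';
    CREATE MATERIALIZED VIEW sales_mv_2017_09 AS
        SELECT * FROM sales_fdw WHERE sale_month = '2017-09';

    -- Index each chunk so the planner's "does this chunk hold the data?" probe stays cheap.
    CREATE INDEX ON sales_mv_2017_08 (sale_month);
    CREATE INDEX ON sales_mv_2017_09 (sale_month);

    -- One flat top-level view; per the reply above, a hierarchy of views would be
    -- flattened by the planner anyway, so a single level is the simpler choice.
    CREATE OR REPLACE VIEW sales_all AS
        SELECT * FROM sales_mv_2017_08
        UNION ALL
        SELECT * FROM sales_mv_2017_09;

    -- Chunks can then be refreshed independently (and several in parallel sessions).
    REFRESH MATERIALIZED VIEW sales_mv_2017_09;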
[
{
"msg_contents": "Hi experts,\n\nFor an academic experiment I need to *restrict the total amount of memory\nthat is available for a pgSQL server* to compute a given set of queries.\n\nI know that I can do this through postgressql.conffile, where I can adjust\nsome parameters related with Resource Management.\n\nThe problem is that: it's not clear for me--given the several parameters\navailable on the config file--which is the parameter that I should change.\n\nWhen I first opened the config file I'm expecting someting like this:\nmax_server_memmory. Instead I found a lot of: shared_buffers, temp_buffers,\nwork_mem, and so on...\n\nGiven that, I've consulted pgSQL docs. on Resource Consumption\n<http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html> and\nI come up with the shared_buffers as the best candidate for what I'm\nlooking for: *the parameter that restricts the total amount of memory that\na pgSQL server can use to perform its computation*. But I'm not completely\nsure about this.\n\nCan you guys give me some insight about which parameters should I adjust to\nrestrict the pgSQL server's memory, please?\n\nHi experts,For an academic experiment I need to restrict the total amount of memory that is available for a pgSQL server to compute a given set of queries.I know that I can do this through postgressql.conffile, where I can adjust some parameters related with Resource Management.The problem is that: it's not clear for me--given the several parameters available on the config file--which is the parameter that I should change. When I first opened the config file I'm expecting someting like this: max_server_memmory. Instead I found a lot of: shared_buffers, temp_buffers, work_mem, and so on...Given that, I've consulted pgSQL docs. on Resource Consumption and I come up with the shared_buffers as the best candidate for what I'm looking for: the parameter that restricts the total amount of memory that a pgSQL server can use to perform its computation. But I'm not completely sure about this. Can you guys give me some insight about which parameters should I adjust to restrict the pgSQL server's memory, please?",
"msg_date": "Tue, 19 Sep 2017 00:49:14 +0000",
"msg_from": "=?UTF-8?B?5ZyS55Sw56Wl5bmz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server"
},
{
"msg_contents": "\n\nOn 09/19/2017 02:49 AM, 園田祥平 wrote:\n> Hi experts,\n> \n> For an academic experiment I need to *restrict the total amount of\n> memory that is available for a pgSQL server* to compute a given set of\n> queries.\n> \n> I know that I can do this through |postgressql.conf|file, where I can\n> adjust some parameters related with Resource Management.\n> \n> The problem is that: it's not clear for me--given the several parameters\n> available on the config file--which is the parameter that I should change. \n> > When I first opened the config file I'm expecting someting like\n> this: |max_server_memmory|. Instead I found a lot\n> of: |shared_buffers|, |temp_buffers|, |work_mem|, and so on...\n> \n> Given that, I've consulted pgSQL docs. on Resource Consumption\n> <http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html> and\n> I come up with the |shared_buffers| as the best candidate for what I'm\n> looking for: *the parameter that restricts the total amount of memory\n> that a pgSQL server can use to perform its computation*. But I'm not\n> completely sure about this. \n> \n> Can you guys give me some insight about which parameters should I adjust\n> to restrict the pgSQL server's memory, please?\n> \n\nThe short answer is \"You can't do that from within PostgreSQL alone.\"\nYou can define size of some memory buffers, but not some hard total\nlimit. One reason is that queries may use multiple work_mem buffers, we\ndon't know how much memory the other queries are consuming, etc. We also\ndon't have any control over page cache, for example.\n\nIf you really need to do that, you'll need to do that at the OS level,\ne.g. by specifying \"mem=X\" kernel parameter, at the VM level, or\nsomething like that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Sep 2017 15:08:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:\n https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server"
}
] |
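To make the answer concrete: the settings the question found in postgresql.conf each bound one area of memory rather than the server as a whole. A rough sketch with placeholder values; as the reply notes, the per-query pieces can multiply, so the worst case is roughly shared_buffers plus max_connections times the per-backend allocations, which is why no hard total can be guaranteed from the config file alone.

    # postgresql.conf -- per-area limits, not a hard server-wide cap (illustrative values)
    shared_buffers       = 1GB     # shared buffer cache, allocated once at startup
    work_mem             = 16MB    # per sort/hash node, per backend; one query may use several
    temp_buffers         = 8MB     # per backend, for temporary tables
    maintenance_work_mem = 256MB   # per VACUUM / CREATE INDEX operation
    max_connections      = 50      # bounds how many backends can each claim the per-backend pieces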
[
{
"msg_contents": "On Tue, 19 Sep 2017 00:49:14 +0000, ???? <[email protected]> wrote:\n\n> For an academic experiment I need to *restrict the total amount of memory\n> that is available for a pgSQL server* to compute a given set of queries.\n>\n> I know that I can do this through postgressql.conffile, where I can adjust\n> some parameters related with Resource Management.\n>\n> The problem is that: it's not clear for me--given the several parameters\n> available on the config file--which is the parameter that I should change.\n>\n> When I first opened the config file I'm expecting someting like this:\n> max_server_memmory. Instead I found a lot of: shared_buffers, \n> temp_buffers,\n> work_mem, and so on...\n>\n> Given that, I've consulted pgSQL docs. on Resource Consumption\n> <http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html \n> and\n> I come up with the shared_buffers as the best candidate for what I'm\n> looking for: *the parameter that restricts the total amount of memory that\n> a pgSQL server can use to perform its computation*. But I'm not completely\n> sure about this.\n>\n> Can you guys give me some insight about which parameters should I \n> adjust to\n> restrict the pgSQL server's memory, please?\n\nWhat you are asking - a way to configure Postgresql to a hard memory \nlimit - effectively is impossible.� Shared memory isn't really a hard \nlimit on anything - it's just a cache for query results.� You can limit \nhow much is available, but there isn't any way to limit how much a \nparticular query [worker process] can take.� Then, local [to the worker \nprocess] work buffers are allocated as needed to perform the joins, \nsorts, groupings, etc. as specified by the query.� For any given query, \nyou may be able to explain/analyze your way to a reasonable estimate of \nthe maximum allocation, but there isn't any way via configuration to \nactually limit the worker process to that maximum.\n\nThe only way I can think of to impose such limits would be to sandbox \nthe processes with ULIMIT.� If you set appropriate limits before \nstarting the postmaster process, those limits will apply to every worker \nprocess it spawns afterwards.�� The thing to remember is that limits on \nprocesses apply individually - e.g., if you say \"ulimit -d 500000\" and \nthen start Postgresql, each individual worker process will be able to \nuse up to 500MB.� And when you limit the data size or the address space, \nyou need to consider and include the shared memory.\nsee https://ss64.com/bash/ulimit.html\n\nIf you want to place a global limit on the entire Postgresql \"server\" \n[i.e. the collection of worker processes], you can limit the user that \nowns the processes (in /etc/security/limits.conf) - which usually is \n\"postgres\" when Postgresql is run as a service.\n\n\nUsing ulimit isn't difficult if you are starting/stopping Postgresql \nmanually, but it's a pain when Postgresql is running as a system \nservice.� To limit a service, you have to either limit the owning user \n[and hope that doesn't break something else], or find and edit the init \nscripts that start the service, and what to do there depends on whether \nthe system is using SysVinit or Upstart to manage the services.\n\nIf you're on Windows, good luck.�� I know that there are things called \n\"Job objects\"� [something in between Linux sessions and process groups] \nthat can be used to limit process resources ... but I have no idea how \nto do that.\n\nHope this ... 
doesn't confuse even more.\nGeorge\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Sep 2017 22:44:56 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re:\n https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server"
},
{
"msg_contents": "On 09/18/2017 10:44 PM, George Neuner wrote:\n> On Tue, 19 Sep 2017 00:49:14 +0000, ???? <[email protected]> wrote:\n> \n>> For an academic experiment I need to *restrict the total amount of memory\n>> that is available for a pgSQL server* to compute a given set of queries.\n>>\n>> I know that I can do this through postgressql.conffile, where I can\n>> adjust\n>> some parameters related with Resource Management.\n>>\n>> The problem is that: it's not clear for me--given the several parameters\n>> available on the config file--which is the parameter that I should\n>> change.\n>>\n>> When I first opened the config file I'm expecting someting like this:\n>> max_server_memmory. Instead I found a lot of: shared_buffers,\n>> temp_buffers,\n>> work_mem, and so on...\n>>\n>> Given that, I've consulted pgSQL docs. on Resource Consumption\n>> <http://www.postgresql.org/docs/9.3/static/runtime-config-resource.html and\n>>\n>> I come up with the shared_buffers as the best candidate for what I'm\n>> looking for: *the parameter that restricts the total amount of memory\n>> that\n>> a pgSQL server can use to perform its computation*. But I'm not\n>> completely\n>> sure about this.\n>>\n>> Can you guys give me some insight about which parameters should I\n>> adjust to\n>> restrict the pgSQL server's memory, please?\n> \n> What you are asking - a way to configure Postgresql to a hard memory\n> limit - effectively is impossible. Shared memory isn't really a hard\n> limit on anything - it's just a cache for query results. You can limit\n> how much is available, but there isn't any way to limit how much a\n> particular query [worker process] can take. Then, local [to the worker\n> process] work buffers are allocated as needed to perform the joins,\n> sorts, groupings, etc. as specified by the query. For any given query,\n> you may be able to explain/analyze your way to a reasonable estimate of\n> the maximum allocation, but there isn't any way via configuration to\n> actually limit the worker process to that maximum.\n> \n> The only way I can think of to impose such limits would be to sandbox\n> the processes with ULIMIT. If you set appropriate limits before\n> starting the postmaster process, those limits will apply to every worker\n> process it spawns afterwards. The thing to remember is that limits on\n> processes apply individually - e.g., if you say \"ulimit -d 500000\" and\n> then start Postgresql, each individual worker process will be able to\n> use up to 500MB. And when you limit the data size or the address space,\n> you need to consider and include the shared memory.\n> see https://ss64.com/bash/ulimit.html\n> \n> If you want to place a global limit on the entire Postgresql \"server\"\n> [i.e. the collection of worker processes], you can limit the user that\n> owns the processes (in /etc/security/limits.conf) - which usually is\n> \"postgres\" when Postgresql is run as a service.\n\n\nThe easiest way to impose a limit on the entire Postgres cluster is to\nrun it in a container using Docker. For example you could use the image\nfrom hub.docker.com and run it with the \"--memory\" argument.\n\nhttps://hub.docker.com/_/postgres/\nhttps://docs.docker.com/engine/reference/commandline/run/\n\n-- \nJonathan Rogers\nSocialserve.com by Emphasys Software\[email protected]",
"msg_date": "Mon, 18 Sep 2017 23:30:59 -0400",
"msg_from": "Jonathan Rogers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:\n https://stackoverflow.com/questions/28844170/how-to-limit-the-memory-that-is-available-for-postgressql-server"
}
] |
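A short sketch of the OS-level options the two replies above describe; the data directory path and the memory figures are illustrative only, the commands themselves are the ones named in the thread.

    # Per-process cap via ulimit, set in the shell that launches the postmaster.
    # Each worker process gets its own limit, and shared memory counts against it.
    ulimit -d 500000                      # data segment limit in kB (~500MB per process)
    pg_ctl -D /var/lib/pgsql/data start

    # For a service install, cap the owning user instead, in /etc/security/limits.conf:
    #   postgres  hard  data  500000

    # Whole-cluster hard limit by running the official image in a container:
    docker run --name pg --memory=2g -e POSTGRES_PASSWORD=secret -d postgres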
[
{
"msg_contents": "I have a complicated query which runs the exact same subplan more than once.\n\nHere is a greatly simplified (and rather pointless) query to replicate the\nissue:\n\nselect aid, sum_bid from\n (select\n aid,\n (select sum(bid) from pgbench_branches\n where bbalance between -10000-abalance and 1+abalance\n ) as sum_bid\n from pgbench_accounts\n where aid between 1 and 1000\n group by aid\n ) asdfsadf\nwhere sum_bid >0;\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.44..375841.29 rows=931 width=12) (actual\ntime=1.233..691.200 rows=679 loops=1)\n Group Key: pgbench_accounts.aid\n Filter: ((SubPlan 2) > 0)\n Rows Removed by Filter: 321\n -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n (cost=0.44..634.32 rows=931 width=8) (actual time=0.040..1.783 rows=1000\nloops=1)\n Index Cond: ((aid >= 1) AND (aid <= 1000))\n SubPlan 2\n -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual\ntime=0.406..0.407 rows=1 loops=1000)\n -> Seq Scan on pgbench_branches pgbench_branches_1\n (cost=0.00..403.00 rows=1 width=4) (actual time=0.392..0.402 rows=1\nloops=1000)\n Filter: ((bbalance >= ('-10000'::integer -\npgbench_accounts.abalance)) AND (bbalance <= (1 +\npgbench_accounts.abalance)))\n Rows Removed by Filter: 199\n SubPlan 1\n -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual\ntime=0.407..0.407 rows=1 loops=679)\n -> Seq Scan on pgbench_branches (cost=0.00..403.00 rows=1\nwidth=4) (actual time=0.388..0.402 rows=1 loops=679)\n Filter: ((bbalance >= ('-10000'::integer -\npgbench_accounts.abalance)) AND (bbalance <= (1 +\npgbench_accounts.abalance)))\n Rows Removed by Filter: 199\n Planning time: 0.534 ms\n Execution time: 691.784 ms\n\n\nhttps://explain.depesz.com/s/Xaib\n\n\nThe subplan is not so fast that I wish it to be executed again or every row\nwhich passes the filter.\n\nI can prevent this dual execution using a CTE, but that creates other\nproblems. 
Is there a way to get rid of it without resorting to that?\n\nMaybe also a question for bugs and/or hackers, is why should I need to do\nanything special to avoid dual execution?\n\nCheers,\n\nJeff\n\nI have a complicated query which runs the exact same subplan more than once.Here is a greatly simplified (and rather pointless) query to replicate the issue: select aid, sum_bid from (select aid, (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) as sum_bid from pgbench_accounts where aid between 1 and 1000 group by aid ) asdfsadfwhere sum_bid >0; QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------- Group (cost=0.44..375841.29 rows=931 width=12) (actual time=1.233..691.200 rows=679 loops=1) Group Key: pgbench_accounts.aid Filter: ((SubPlan 2) > 0) Rows Removed by Filter: 321 -> Index Scan using pgbench_accounts_pkey on pgbench_accounts (cost=0.44..634.32 rows=931 width=8) (actual time=0.040..1.783 rows=1000 loops=1) Index Cond: ((aid >= 1) AND (aid <= 1000)) SubPlan 2 -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual time=0.406..0.407 rows=1 loops=1000) -> Seq Scan on pgbench_branches pgbench_branches_1 (cost=0.00..403.00 rows=1 width=4) (actual time=0.392..0.402 rows=1 loops=1000) Filter: ((bbalance >= ('-10000'::integer - pgbench_accounts.abalance)) AND (bbalance <= (1 + pgbench_accounts.abalance))) Rows Removed by Filter: 199 SubPlan 1 -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual time=0.407..0.407 rows=1 loops=679) -> Seq Scan on pgbench_branches (cost=0.00..403.00 rows=1 width=4) (actual time=0.388..0.402 rows=1 loops=679) Filter: ((bbalance >= ('-10000'::integer - pgbench_accounts.abalance)) AND (bbalance <= (1 + pgbench_accounts.abalance))) Rows Removed by Filter: 199 Planning time: 0.534 ms Execution time: 691.784 mshttps://explain.depesz.com/s/XaibThe subplan is not so fast that I wish it to be executed again or every row which passes the filter. I can prevent this dual execution using a CTE, but that creates other problems. Is there a way to get rid of it without resorting to that?Maybe also a question for bugs and/or hackers, is why should I need to do anything special to avoid dual execution?Cheers,Jeff",
"msg_date": "Tue, 19 Sep 2017 16:30:27 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "repeated subplan execution"
},
{
"msg_contents": "Hi All,\n\nI didn't understand why same sub plan for the sub query executed two times?\nAs per the query it should have been executed only once.\n\nCan someone please explain this behaviour of query execution ?\n\nThanks a lot.\n\nOn Wed, 20 Sep 2017 at 5:01 AM, Jeff Janes <[email protected]> wrote:\n\n> I have a complicated query which runs the exact same subplan more than\n> once.\n>\n> Here is a greatly simplified (and rather pointless) query to replicate the\n> issue:\n>\n> select aid, sum_bid from\n> (select\n> aid,\n> (select sum(bid) from pgbench_branches\n> where bbalance between -10000-abalance and 1+abalance\n> ) as sum_bid\n> from pgbench_accounts\n> where aid between 1 and 1000\n> group by aid\n> ) asdfsadf\n> where sum_bid >0;\n>\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> Group (cost=0.44..375841.29 rows=931 width=12) (actual\n> time=1.233..691.200 rows=679 loops=1)\n> Group Key: pgbench_accounts.aid\n> Filter: ((SubPlan 2) > 0)\n> Rows Removed by Filter: 321\n> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.44..634.32 rows=931 width=8) (actual time=0.040..1.783 rows=1000\n> loops=1)\n> Index Cond: ((aid >= 1) AND (aid <= 1000))\n> SubPlan 2\n> -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual\n> time=0.406..0.407 rows=1 loops=1000)\n> -> Seq Scan on pgbench_branches pgbench_branches_1\n> (cost=0.00..403.00 rows=1 width=4) (actual time=0.392..0.402 rows=1\n> loops=1000)\n> Filter: ((bbalance >= ('-10000'::integer -\n> pgbench_accounts.abalance)) AND (bbalance <= (1 +\n> pgbench_accounts.abalance)))\n> Rows Removed by Filter: 199\n> SubPlan 1\n> -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual\n> time=0.407..0.407 rows=1 loops=679)\n> -> Seq Scan on pgbench_branches (cost=0.00..403.00 rows=1\n> width=4) (actual time=0.388..0.402 rows=1 loops=679)\n> Filter: ((bbalance >= ('-10000'::integer -\n> pgbench_accounts.abalance)) AND (bbalance <= (1 +\n> pgbench_accounts.abalance)))\n> Rows Removed by Filter: 199\n> Planning time: 0.534 ms\n> Execution time: 691.784 ms\n>\n>\n> https://explain.depesz.com/s/Xaib\n>\n>\n> The subplan is not so fast that I wish it to be executed again or every\n> row which passes the filter.\n>\n> I can prevent this dual execution using a CTE, but that creates other\n> problems. Is there a way to get rid of it without resorting to that?\n>\n> Maybe also a question for bugs and/or hackers, is why should I need to do\n> anything special to avoid dual execution?\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi All,I didn't understand why same sub plan for the sub query executed two times? As per the query it should have been executed only once.Can someone please explain this behaviour of query execution ? 
Thanks a lot.On Wed, 20 Sep 2017 at 5:01 AM, Jeff Janes <[email protected]> wrote:I have a complicated query which runs the exact same subplan more than once.Here is a greatly simplified (and rather pointless) query to replicate the issue: select aid, sum_bid from (select aid, (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) as sum_bid from pgbench_accounts where aid between 1 and 1000 group by aid ) asdfsadfwhere sum_bid >0; QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------- Group (cost=0.44..375841.29 rows=931 width=12) (actual time=1.233..691.200 rows=679 loops=1) Group Key: pgbench_accounts.aid Filter: ((SubPlan 2) > 0) Rows Removed by Filter: 321 -> Index Scan using pgbench_accounts_pkey on pgbench_accounts (cost=0.44..634.32 rows=931 width=8) (actual time=0.040..1.783 rows=1000 loops=1) Index Cond: ((aid >= 1) AND (aid <= 1000)) SubPlan 2 -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual time=0.406..0.407 rows=1 loops=1000) -> Seq Scan on pgbench_branches pgbench_branches_1 (cost=0.00..403.00 rows=1 width=4) (actual time=0.392..0.402 rows=1 loops=1000) Filter: ((bbalance >= ('-10000'::integer - pgbench_accounts.abalance)) AND (bbalance <= (1 + pgbench_accounts.abalance))) Rows Removed by Filter: 199 SubPlan 1 -> Aggregate (cost=403.00..403.01 rows=1 width=8) (actual time=0.407..0.407 rows=1 loops=679) -> Seq Scan on pgbench_branches (cost=0.00..403.00 rows=1 width=4) (actual time=0.388..0.402 rows=1 loops=679) Filter: ((bbalance >= ('-10000'::integer - pgbench_accounts.abalance)) AND (bbalance <= (1 + pgbench_accounts.abalance))) Rows Removed by Filter: 199 Planning time: 0.534 ms Execution time: 691.784 mshttps://explain.depesz.com/s/XaibThe subplan is not so fast that I wish it to be executed again or every row which passes the filter. I can prevent this dual execution using a CTE, but that creates other problems. Is there a way to get rid of it without resorting to that?Maybe also a question for bugs and/or hackers, is why should I need to do anything special to avoid dual execution?Cheers,Jeff",
"msg_date": "Wed, 20 Sep 2017 02:31:20 +0000",
"msg_from": "monika yadav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: repeated subplan execution"
},
{
"msg_contents": "On Tue, Sep 19, 2017 at 7:31 PM, monika yadav <[email protected]>\nwrote:\n\n> Hi All,\n>\n> I didn't understand why same sub plan for the sub query executed two\n> times? As per the query it should have been executed only once.\n>\n> Can someone please explain this behaviour of query execution ?\n>\n\n\nThe sum_bid at the end of the query is an alias for the entire subselect,\nso it not entirely surprising that it gets interpolated twice. it is just\nkind of unfortunate from a performance perspective.\n\nThe query I originally gave is equivalent to this query:\n\n\n select\n aid,\n (select sum(bid) from pgbench_branches\n where bbalance between -10000-abalance and 1+abalance\n ) as sum_bid\n from pgbench_accounts\n where aid between 1 and 1000\n group by aid\n having (select sum(bid) from pgbench_branches where bbalance\nbetween -10000-abalance and 1+abalance ) >0;\n\n\nIn my originally query I just wrapped the whole thing in another select, so\nthat I could use the alias rather than having to mechanically repeat the\nentire subquery again in the HAVING section. They give identical plans.\n\nCheers,\n\nJeff\n\nOn Tue, Sep 19, 2017 at 7:31 PM, monika yadav <[email protected]> wrote:Hi All,I didn't understand why same sub plan for the sub query executed two times? As per the query it should have been executed only once.Can someone please explain this behaviour of query execution ? The sum_bid at the end of the query is an alias for the entire subselect, so it not entirely surprising that it gets interpolated twice. it is just kind of unfortunate from a performance perspective.The query I originally gave is equivalent to this query: select aid, (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) as sum_bid from pgbench_accounts where aid between 1 and 1000 group by aid having (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) >0;In my originally query I just wrapped the whole thing in another select, so that I could use the alias rather than having to mechanically repeat the entire subquery again in the HAVING section. They give identical plans.Cheers,Jeff",
"msg_date": "Wed, 20 Sep 2017 09:46:43 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: repeated subplan execution"
},
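One way to avoid the duplicated subplan - sketched here against the same pgbench tables, and not taken from the thread itself - is to move the scalar subquery into a LATERAL subquery, so that its single result can be referenced both in the select list and in the filter:

    select a.aid, s.sum_bid
    from pgbench_accounts a
    cross join lateral (
        -- correlated aggregate, evaluated once per account row
        select sum(b.bid) as sum_bid
        from pgbench_branches b
        where b.bbalance between -10000 - a.abalance and 1 + a.abalance
    ) s
    where a.aid between 1 and 1000
      and s.sum_bid > 0;

The aggregate is still computed for every account row, but only once per row, instead of once for the output column and again for the filter.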
{
"msg_contents": "Hi Jeff,\n\nThanks for the update and clarification. I will look to see a better\nalternative to resolve this twice execution of same plan.\n\nOn Wed, Sep 20, 2017 at 10:16 PM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Sep 19, 2017 at 7:31 PM, monika yadav <[email protected]>\n> wrote:\n>\n>> Hi All,\n>>\n>> I didn't understand why same sub plan for the sub query executed two\n>> times? As per the query it should have been executed only once.\n>>\n>> Can someone please explain this behaviour of query execution ?\n>>\n>\n>\n> The sum_bid at the end of the query is an alias for the entire subselect,\n> so it not entirely surprising that it gets interpolated twice. it is just\n> kind of unfortunate from a performance perspective.\n>\n> The query I originally gave is equivalent to this query:\n>\n>\n> select\n> aid,\n> (select sum(bid) from pgbench_branches\n> where bbalance between -10000-abalance and 1+abalance\n> ) as sum_bid\n> from pgbench_accounts\n> where aid between 1 and 1000\n> group by aid\n> having (select sum(bid) from pgbench_branches where bbalance\n> between -10000-abalance and 1+abalance ) >0;\n>\n>\n> In my originally query I just wrapped the whole thing in another select,\n> so that I could use the alias rather than having to mechanically repeat the\n> entire subquery again in the HAVING section. They give identical plans.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff,Thanks for the update and clarification. I will look to see a better alternative to resolve this twice execution of same plan. On Wed, Sep 20, 2017 at 10:16 PM, Jeff Janes <[email protected]> wrote:On Tue, Sep 19, 2017 at 7:31 PM, monika yadav <[email protected]> wrote:Hi All,I didn't understand why same sub plan for the sub query executed two times? As per the query it should have been executed only once.Can someone please explain this behaviour of query execution ? The sum_bid at the end of the query is an alias for the entire subselect, so it not entirely surprising that it gets interpolated twice. it is just kind of unfortunate from a performance perspective.The query I originally gave is equivalent to this query: select aid, (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) as sum_bid from pgbench_accounts where aid between 1 and 1000 group by aid having (select sum(bid) from pgbench_branches where bbalance between -10000-abalance and 1+abalance ) >0;In my originally query I just wrapped the whole thing in another select, so that I could use the alias rather than having to mechanically repeat the entire subquery again in the HAVING section. They give identical plans.Cheers,Jeff",
"msg_date": "Thu, 21 Sep 2017 16:05:13 +0530",
"msg_from": "monika yadav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: repeated subplan execution"
}
] |
[
{
"msg_contents": "Hi\n\nI wanted to query top 20 rows by joining two tables, one table having\naround 1 lac rows and other table having 5 lac rows. Since I am using ORDER\nBY in the query so I created compound index with the columns being used in\nORDER BY. Initially index size was 939 MB.\n\nThen I ran EXPLAIN(ANALYZE,BUFFERS) for this query which took around 20\nsecs as it was not using the compound index for this query. So I drop this\nindex and created again. The index size now got reduced to 559 MB.\n\nAfter this if I ran the EXPLAIN(ANALYZE,BUFFERS) for this query it was\nusing the index and took only 5 secs.\n\nCan you please explain how the index size got reduced after recreating it\nand how the query started using the index after recreating?\n\nThanks and Regards\nSubramaniam\n\nHiI wanted to query top 20 rows by joining two tables, one table having around 1 lac rows and other table having 5 lac rows. Since I am using ORDER BY in the query so I created compound index with the columns being used in ORDER BY. Initially index size was 939 MB.Then I ran EXPLAIN(ANALYZE,BUFFERS) for this query which took around 20 secs as it was not using the compound index for this query. So I drop this index and created again. The index size now got reduced to 559 MB.After this if I ran the EXPLAIN(ANALYZE,BUFFERS) for this query it was using the index and took only 5 secs.Can you please explain how the index size got reduced after recreating it and how the query started using the index after recreating?Thanks and RegardsSubramaniam",
"msg_date": "Thu, 21 Sep 2017 16:22:11 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query regarding EXPLAIN (ANALYZE,BUFFERS)"
},
{
"msg_contents": "2017-09-21 12:52 GMT+02:00 Subramaniam C <[email protected]>:\n\n> Hi\n>\n> I wanted to query top 20 rows by joining two tables, one table having\n> around 1 lac rows and other table having 5 lac rows. Since I am using ORDER\n> BY in the query so I created compound index with the columns being used in\n> ORDER BY. Initially index size was 939 MB.\n>\n> Then I ran EXPLAIN(ANALYZE,BUFFERS) for this query which took around 20\n> secs as it was not using the compound index for this query. So I drop this\n> index and created again. The index size now got reduced to 559 MB.\n>\n> After this if I ran the EXPLAIN(ANALYZE,BUFFERS) for this query it was\n> using the index and took only 5 secs.\n>\n> Can you please explain how the index size got reduced after recreating it\n> and how the query started using the index after recreating?\n>\n>\nThe index can be bloated - when you recreate it or when you use REINDEX\ncommand, then you remove a bloat content. VACUUM FULL recreate indexes too.\n\nFresh index needs less space on disc (the read is faster), in memory too\nand has better structure - a access should be faster.\n\n\n\n> Thanks and Regards\n> Subramaniam\n>\n\n2017-09-21 12:52 GMT+02:00 Subramaniam C <[email protected]>:HiI wanted to query top 20 rows by joining two tables, one table having around 1 lac rows and other table having 5 lac rows. Since I am using ORDER BY in the query so I created compound index with the columns being used in ORDER BY. Initially index size was 939 MB.Then I ran EXPLAIN(ANALYZE,BUFFERS) for this query which took around 20 secs as it was not using the compound index for this query. So I drop this index and created again. The index size now got reduced to 559 MB.After this if I ran the EXPLAIN(ANALYZE,BUFFERS) for this query it was using the index and took only 5 secs.Can you please explain how the index size got reduced after recreating it and how the query started using the index after recreating?The index can be bloated - when you recreate it or when you use REINDEX command, then you remove a bloat content. VACUUM FULL recreate indexes too.Fresh index needs less space on disc (the read is faster), in memory too and has better structure - a access should be faster. Thanks and RegardsSubramaniam",
"msg_date": "Thu, 21 Sep 2017 20:37:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query regarding EXPLAIN (ANALYZE,BUFFERS)"
}
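As a concrete sketch of what Pavel describes (the index name here is hypothetical, and note that REINDEX blocks writes to the table while it runs):

    SELECT pg_size_pretty(pg_relation_size('my_compound_idx'));  -- size before
    REINDEX INDEX my_compound_idx;                                -- rebuild the index, dropping the bloat
    SELECT pg_size_pretty(pg_relation_size('my_compound_idx'));  -- size after

Dropping and recreating the index, as done above in the thread, has the same effect: the smaller, freshly packed index is cheaper to read, which also makes it look cheaper to the planner.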
] |
[
{
"msg_contents": "Hi,\n\nWe're using PostgreSQL 9.6.3 on Linux.\n\nI have a pl/pgsql stored procedure that is not utilizing the new \nparallel sequential scan feature although manually running the same \nquery does (assuming the same settings/optimizer hints are used ofc).\nA rough outline of the stored procedure (omitting all the boring parts) \nis given below ; basically I'm dynamically creating a SQL statement and \nthen using RETURN QUERY EXECUTE to run it.\n\nEXPLAIN'ing the query that's printed by the \"RAISE NOTICE\" (with the \nsame options as the stored procedure) produces a plan that uses parallel \nexecution but invoking the stored procedure obviously does not as the \nexecution time is orders of magnitudes slower.\n\nAny ideas ?\n\nThanks,\nTobias\n\n\n---------------------------------\nCREATE OR REPLACE FUNCTION do_stuff(....lots of parameters...)\nRETURNS SETOF importer.statistic_type AS\n$BODY$\nDECLARE\n _sql text;\nBEGIN\n _sql := 'SELECT ''' || _hostname || '''::text AS hostname,'\n 'interval_start, '\n 'total_filesize, '\n 'total_filecount, '\n 'EXTRACT(EPOCH FROM combined_import_time_seconds) AS \ncombined_import_time_seconds, '\n 'min_throughput, '\n 'max_throughput, '\n ''''|| _filetype ||'''::text AS filetype, '\n 'busy_seconds '\n 'FROM ( SELECT '\n 'vf_cut_func(starttime' || \n_cut_func_parameter || ') AS interval_start, '\n 'sum(filesize) AS total_filesize, '\n 'count(*) AS total_filecount, '\n 'sum( endtime-starttime ) AS \ncombined_import_time_seconds, '\n 'min(filesize/EXTRACT(EPOCH FROM \nendtime-starttime)) AS min_throughput, '\n 'max(filesize/EXTRACT(EPOCH FROM \nendtime-starttime)) AS max_throughput, '\n 'busy_time_seconds( \ntstzrange(starttime,MIN(endtime, vf_cut_func(starttime' || \n_cut_func_parameter || ') + ' || _interval || ') ) ) AS busy_seconds '\n 'FROM importer.log '\n 'WHERE filetype = '''|| _filetype ||''' '\n 'AND starttime >= ''' || _starttime || ''' '\n 'AND starttime < ''' || _endtime || ''' '\n 'AND hostname=''' || _hostname || ''' '\n 'GROUP BY vf_cut_func(starttime' || \n_cut_func_parameter || '), hostname) AS foo;';\n\n RAISE NOTICE '_sql:%', _sql;\n\n RETURN QUERY EXECUTE _sql;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n SET \"TimeZone\" TO 'utc'\n set parallel_setup_cost TO 1\n set max_parallel_workers_per_gather TO 4\n set min_parallel_relation_size TO 1\n set enable_indexscan TO false\n set enable_bitmapscan TO false;\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Sep 2017 13:10:56 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel sequential scan not supported for stored procedure with\n RETURN QUERY EXECUTE ?"
}
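One way to see which plan the function really gets (a sketch, assuming superuser access; the parameter list is omitted just as in the original) is auto_explain with log_nested_statements enabled, which also logs the plans of statements executed inside PL/pgSQL. The logged plan can then be compared with EXPLAIN (ANALYZE) of the statement printed by the RAISE NOTICE:

    LOAD 'auto_explain';
    SET auto_explain.log_analyze = on;            -- include run times in the logged plans
    SET auto_explain.log_min_duration = 0;        -- log every statement
    SET auto_explain.log_nested_statements = on;  -- log statements run inside functions, too
    SELECT * FROM do_stuff(/* ... parameters ... */);
    -- the plan chosen inside the function appears in the server log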
] |
[
{
"msg_contents": "Hi\n\nWhen I try to execute the query from sql command line then that query is\ntaking only around 1 sec. But when I execute the query using JDBC(Java)\nusing preparedStatement then the same query is taking around 10 secs.\n\nCan you please let us know the reason and how to fix this issue?\n\nThanks and Regards\nSubramaniam\n\nHiWhen I try to execute the query from sql command line then that query is taking only around 1 sec. But when I execute the query using JDBC(Java) using preparedStatement then the same query is taking around 10 secs.Can you please let us know the reason and how to fix this issue?Thanks and RegardsSubramaniam",
"msg_date": "Thu, 28 Sep 2017 13:49:30 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query in JDBC"
},
{
"msg_contents": "On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C\n<[email protected]> wrote:\n> Hi\n>\n> When I try to execute the query from sql command line then that query is\n> taking only around 1 sec. But when I execute the query using JDBC(Java)\n> using preparedStatement then the same query is taking around 10 secs.\n>\n> Can you please let us know the reason and how to fix this issue?\n\n\nI think jdbc always uses cursor, which can be problematic with default\nconfiguration, because postgres will try to generate plans that\nreturns fast the first rows but not all the rows . Can you try to\nconfigure cursor_tuple_fraction to 1 and see if that fixes your issue?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Sep 2017 10:48:53 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
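For reference, a quick sketch of trying this suggestion (the database name below is hypothetical); cursor_tuple_fraction can be set in the JDBC session right after connecting, or database-wide:

    SET cursor_tuple_fraction = 1.0;                   -- plan cursors for total runtime, not first rows
    -- or, for every new connection to one database:
    ALTER DATABASE mydb SET cursor_tuple_fraction = 1.0;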
{
"msg_contents": "I configured cursor_tuple_fraction to 1 but still I am facing the same\nissue.\nPlease help.\n\nOn Thu, Sep 28, 2017 at 2:18 PM, Julien Rouhaud <[email protected]> wrote:\n\n> On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C\n> <[email protected]> wrote:\n> > Hi\n> >\n> > When I try to execute the query from sql command line then that query is\n> > taking only around 1 sec. But when I execute the query using JDBC(Java)\n> > using preparedStatement then the same query is taking around 10 secs.\n> >\n> > Can you please let us know the reason and how to fix this issue?\n>\n>\n> I think jdbc always uses cursor, which can be problematic with default\n> configuration, because postgres will try to generate plans that\n> returns fast the first rows but not all the rows . Can you try to\n> configure cursor_tuple_fraction to 1 and see if that fixes your issue?\n>\n\nI configured cursor_tuple_fraction to 1 but still I am facing the same issue.Please help.On Thu, Sep 28, 2017 at 2:18 PM, Julien Rouhaud <[email protected]> wrote:On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C\n<[email protected]> wrote:\n> Hi\n>\n> When I try to execute the query from sql command line then that query is\n> taking only around 1 sec. But when I execute the query using JDBC(Java)\n> using preparedStatement then the same query is taking around 10 secs.\n>\n> Can you please let us know the reason and how to fix this issue?\n\n\nI think jdbc always uses cursor, which can be problematic with default\nconfiguration, because postgres will try to generate plans that\nreturns fast the first rows but not all the rows . Can you try to\nconfigure cursor_tuple_fraction to 1 and see if that fixes your issue?",
"msg_date": "Thu, 28 Sep 2017 14:28:24 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C\n<[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from\npsql and run from application (you can use auto_explain for that if\nneeded, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Sep 2017 11:20:45 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "https://www.postgresql.org/docs/current/static/auto-explain.html\r\n\r\n\r\n-----Message d'origine-----\r\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\r\nEnvoyé : jeudi 28 septembre 2017 11:21\r\nÀ : Subramaniam C\r\nCc : [email protected]\r\nObjet : Re: [PERFORM] Slow query in JDBC\r\n\r\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\r\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\r\n> issue.\r\n\r\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n!!!*************************************************************************************\r\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\r\n\r\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Sep 2017 11:26:06 +0200",
"msg_from": "Pavy Philippe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "First output show the output when the query is executed from sql command\nline. The second output show when it is executed from the application. AS\nper the output it is clear that the when the query is executed through JDBC\nits not using the index (health_index) instead its doing sequence scan.\nPlease let us know how this issue can be resolved from JDBC?\n\n1.)\n\n\n*Limit (cost=510711.53..510711.58 rows=20 width=72)*\n\n* -> Sort (cost=510711.53..511961.53 rows=500000 width=72)*\n\n* Sort Key: health_timeseries_table.health*\n\n* -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72)*\n\n* -> Merge Left Join (cost=0.98..491156.71 rows=500000\nwidth=64)*\n\n* Merge Cond: (object_table.uuid =\nhealth_timeseries_table.mobid)*\n\n* -> Unique (cost=0.42..57977.00 rows=500000\nwidth=64)*\n\n* -> Index Scan Backward using object_table_pkey\non object_table (cost=0.42..56727.00 rows=500000 width=64)*\n\n* Index Cond: ((\"timestamp\" >= 0) AND\n(\"timestamp\" <= '1505990086834'::bigint))*\n\n* Filter: (tenantid = 'perspica'::text)*\n\n* -> Materialize (cost=0.56..426235.64 rows=55526\nwidth=16)*\n\n* -> Unique (cost=0.56..425541.56 rows=55526\nwidth=24)*\n\n* -> Index Only Scan using health_index on\nhealth_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24)*\n\n* Index Cond: ((\"timestamp\" >=\n'1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n\n*LOG: duration: 1971.697 ms*\n\n\n\n\n\n2.)\n\n\nLimit (cost=457629.21..457629.26 rows=20 width=72)\n\n -> Sort (cost=457629.21..458879.21 rows=500000 width=72)\n\n Sort Key: health_timeseries_table.health\n\n -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72)\n\n -> Merge Left Join (cost=367431.49..438074.39 rows=500000\nwidth=64)\n\n Merge Cond: (object_table.uuid =\nhealth_timeseries_table.mobid)\n\n -> Unique (cost=0.42..57977.00 rows=500000 width=64)\n\n -> Index Scan Backward using object_table_pkey\non object_table (cost=0.42..56727.00 rows=500000 width=64)\n\n Index Cond: ((\"timestamp\" >= '0'::bigint)\nAND (\"timestamp\" <= '1505990400000'::bigint))\n\n Filter: (tenantid = 'perspica'::text)\n\n -> Materialize (cost=367431.07..373153.32 rows=55526\nwidth=16)\n\n -> Unique (cost=367431.07..372459.24 rows=55526\nwidth=24)\n\n -> Sort (cost=367431.07..369945.16\nrows=1005634 width=24)\n\n Sort Key:\nhealth_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\"\nDESC, health_timeseries_table.health\n\n -> Seq Scan on\nhealth_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n\n\n Filter: ((\"timestamp\" >=\n'1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n\nOn Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]>\nwrote:\n\n> https://www.postgresql.org/docs/current/static/auto-explain.html\n>\n>\n> -----Message d'origine-----\n> De : [email protected] [mailto:pgsql-performance-\n> [email protected]] De la part de Julien Rouhaud\n> Envoyé : jeudi 28 septembre 2017 11:21\n> À : Subramaniam C\n> Cc : [email protected]\n> Objet : Re: [PERFORM] Slow query in JDBC\n>\n> On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <\n> [email protected]> wrote:\n> > I configured cursor_tuple_fraction to 1 but still I am facing the same\n> > issue.\n>\n> Can you show explain (analyze, buffers) of the query when run from psql\n> and run from application (you can use auto_explain for that if needed, see\n> https://www.postgresql.org/docs/current/static/auto-explain.html).\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> 
To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> !!!*********************************************************\n> ****************************\n> \"Ce message et les pièces jointes sont confidentiels et réservés à l'usage\n> exclusif de ses destinataires. Il peut également être protégé par le secret\n> professionnel. Si vous recevez ce message par erreur, merci d'en avertir\n> immédiatement l'expéditeur et de le détruire. L'intégrité du message ne\n> pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra\n> être recherchée quant au contenu de ce message. Bien que les meilleurs\n> efforts soient faits pour maintenir cette transmission exempte de tout\n> virus, l'expéditeur ne donne aucune garantie à cet égard et sa\n> responsabilité ne saurait être recherchée pour tout dommage résultant d'un\n> virus transmis.\n>\n> This e-mail and the documents attached are confidential and intended\n> solely for the addressee; it may also be privileged. If you receive this\n> e-mail in error, please notify the sender immediately and destroy it. As\n> its integrity cannot be secured on the Internet, the Worldline liability\n> cannot be triggered for the message content. Although the sender endeavours\n> to maintain a computer virus-free network, the sender does not warrant that\n> this transmission is virus-free and will not be liable for any damages\n> resulting from any virus transmitted.!!!\"\n>\n\nFirst output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. Please let us know how this issue can be resolved from JDBC?1.)Limit (cost=510711.53..510711.58 rows=20 width=72) -> Sort (cost=510711.53..511961.53 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72) -> Merge Left Join (cost=0.98..491156.71 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= 0) AND (\"timestamp\" <= '1505990086834'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=0.56..426235.64 rows=55526 width=16) -> Unique (cost=0.56..425541.56 rows=55526 width=24) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))LOG: duration: 1971.697 ms2.)Limit (cost=457629.21..457629.26 rows=20 width=72) -> Sort (cost=457629.21..458879.21 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72) -> Merge Left Join (cost=367431.49..438074.39 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=367431.07..373153.32 rows=55526 width=16) -> Unique (cost=367431.07..372459.24 rows=55526 width=24) 
-> Sort (cost=367431.07..369945.16 rows=1005634 width=24) Sort Key: health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\" DESC, health_timeseries_table.health -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]> wrote:https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\nEnvoyé : jeudi 28 septembre 2017 11:21\nÀ : Subramaniam C\nCc : [email protected]\nObjet : Re: [PERFORM] Slow query in JDBC\n\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n!!!*************************************************************************************\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"",
"msg_date": "Thu, 28 Sep 2017 15:29:08 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "What version of the driver are you using?\n\nThe driver does not automatically use a cursor, but it does use prepared\nstatements which can be slower.\n\n\nCan you provide the query and the jdbc query ?\n\n\n\nDave Cramer\n\[email protected]\nwww.postgresintl.com\n\nOn 28 September 2017 at 05:59, Subramaniam C <[email protected]>\nwrote:\n\n> First output show the output when the query is executed from sql command\n> line. The second output show when it is executed from the application. AS\n> per the output it is clear that the when the query is executed through JDBC\n> its not using the index (health_index) instead its doing sequence scan.\n> Please let us know how this issue can be resolved from JDBC?\n>\n> 1.)\n>\n>\n> *Limit (cost=510711.53..510711.58 rows=20 width=72)*\n>\n> * -> Sort (cost=510711.53..511961.53 rows=500000 width=72)*\n>\n> * Sort Key: health_timeseries_table.health*\n>\n> * -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72)*\n>\n> * -> Merge Left Join (cost=0.98..491156.71 rows=500000\n> width=64)*\n>\n> * Merge Cond: (object_table.uuid =\n> health_timeseries_table.mobid)*\n>\n> * -> Unique (cost=0.42..57977.00 rows=500000\n> width=64)*\n>\n> * -> Index Scan Backward using\n> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n> width=64)*\n>\n> * Index Cond: ((\"timestamp\" >= 0) AND\n> (\"timestamp\" <= '1505990086834'::bigint))*\n>\n> * Filter: (tenantid = 'perspica'::text)*\n>\n> * -> Materialize (cost=0.56..426235.64 rows=55526\n> width=16)*\n>\n> * -> Unique (cost=0.56..425541.56 rows=55526\n> width=24)*\n>\n> * -> Index Only Scan\n> using health_index on health_timeseries_table (cost=0.56..421644.56\n> rows=1558800 width=24)*\n>\n> * Index Cond: ((\"timestamp\" >=\n> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>\n> *LOG: duration: 1971.697 ms*\n>\n>\n>\n>\n>\n> 2.)\n>\n>\n> Limit (cost=457629.21..457629.26 rows=20 width=72)\n>\n> -> Sort (cost=457629.21..458879.21 rows=500000 width=72)\n>\n> Sort Key: health_timeseries_table.health\n>\n> -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72)\n>\n> -> Merge Left Join (cost=367431.49..438074.39 rows=500000\n> width=64)\n>\n> Merge Cond: (object_table.uuid =\n> health_timeseries_table.mobid)\n>\n> -> Unique (cost=0.42..57977.00 rows=500000 width=64)\n>\n> -> Index Scan Backward using object_table_pkey\n> on object_table (cost=0.42..56727.00 rows=500000 width=64)\n>\n> Index Cond: ((\"timestamp\" >= '0'::bigint)\n> AND (\"timestamp\" <= '1505990400000'::bigint))\n>\n> Filter: (tenantid = 'perspica'::text)\n>\n> -> Materialize (cost=367431.07..373153.32 rows=55526\n> width=16)\n>\n> -> Unique (cost=367431.07..372459.24\n> rows=55526 width=24)\n>\n> -> Sort (cost=367431.07..369945.16\n> rows=1005634 width=24)\n>\n> Sort Key:\n> health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\"\n> DESC, health_timeseries_table.health\n>\n> -> Seq Scan on\n> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>\n>\n> Filter: ((\"timestamp\" >=\n> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>\n> On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <\n> [email protected]> wrote:\n>\n>> https://www.postgresql.org/docs/current/static/auto-explain.html\n>>\n>>\n>> -----Message d'origine-----\n>> De : [email protected] [mailto:\n>> [email protected]] De la part de Julien Rouhaud\n>> Envoyé : jeudi 28 septembre 2017 11:21\n>> À : Subramaniam C\n>> Cc : [email protected]\n>> Objet : Re: [PERFORM] Slow 
query in JDBC\n>>\n>> On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <\n>> [email protected]> wrote:\n>> > I configured cursor_tuple_fraction to 1 but still I am facing the same\n>> > issue.\n>>\n>> Can you show explain (analyze, buffers) of the query when run from psql\n>> and run from application (you can use auto_explain for that if needed, see\n>> https://www.postgresql.org/docs/current/static/auto-explain.html).\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> !!!*********************************************************\n>> ****************************\n>> \"Ce message et les pièces jointes sont confidentiels et réservés à\n>> l'usage exclusif de ses destinataires. Il peut également être protégé par\n>> le secret professionnel. Si vous recevez ce message par erreur, merci d'en\n>> avertir immédiatement l'expéditeur et de le détruire. L'intégrité du\n>> message ne pouvant être assurée sur Internet, la responsabilité de\n>> Worldline ne pourra être recherchée quant au contenu de ce message. Bien\n>> que les meilleurs efforts soient faits pour maintenir cette transmission\n>> exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et\n>> sa responsabilité ne saurait être recherchée pour tout dommage résultant\n>> d'un virus transmis.\n>>\n>> This e-mail and the documents attached are confidential and intended\n>> solely for the addressee; it may also be privileged. If you receive this\n>> e-mail in error, please notify the sender immediately and destroy it. As\n>> its integrity cannot be secured on the Internet, the Worldline liability\n>> cannot be triggered for the message content. Although the sender endeavours\n>> to maintain a computer virus-free network, the sender does not warrant that\n>> this transmission is virus-free and will not be liable for any damages\n>> resulting from any virus transmitted.!!!\"\n>>\n>\n>\n\nWhat version of the driver are you using?The driver does not automatically use a cursor, but it does use prepared statements which can be slower.Can you provide the query and the jdbc query ?Dave [email protected]\nOn 28 September 2017 at 05:59, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. 
Please let us know how this issue can be resolved from JDBC?1.)Limit (cost=510711.53..510711.58 rows=20 width=72) -> Sort (cost=510711.53..511961.53 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72) -> Merge Left Join (cost=0.98..491156.71 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= 0) AND (\"timestamp\" <= '1505990086834'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=0.56..426235.64 rows=55526 width=16) -> Unique (cost=0.56..425541.56 rows=55526 width=24) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))LOG: duration: 1971.697 ms2.)Limit (cost=457629.21..457629.26 rows=20 width=72) -> Sort (cost=457629.21..458879.21 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72) -> Merge Left Join (cost=367431.49..438074.39 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=367431.07..373153.32 rows=55526 width=16) -> Unique (cost=367431.07..372459.24 rows=55526 width=24) -> Sort (cost=367431.07..369945.16 rows=1005634 width=24) Sort Key: health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\" DESC, health_timeseries_table.health -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]> wrote:https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\nEnvoyé : jeudi 28 septembre 2017 11:21\nÀ : Subramaniam C\nCc : [email protected]\nObjet : Re: [PERFORM] Slow query in JDBC\n\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n!!!*************************************************************************************\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. 
L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"",
"msg_date": "Thu, 28 Sep 2017 09:59:48 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "The JDBC version is 9.4-1201-jdbc41.\n\nQuery :-\n\nselect count(*) OVER() AS\ncount,uuid,availability,objectname,datasourcename,datasourcetype,objecttype,health\nfrom (select distinct on (health_timeseries_table.mobid) mobid,\nhealth_timeseries_table.health, health_timeseries_table.timestamp from\nhealth_timeseries_table where timestamp >= 1505989186834 and timestamp <=\n1505990086834 ORDER BY health_timeseries_table.mobid DESC,\nhealth_timeseries_table.timestamp DESC, health_timeseries_table.health ASC)\nt right join (SELECT DISTINCT ON (object_table.uuid) uuid,\nobject_table.timestamp,object_table.availability,object_table.objectname,object_table.datasourcename,object_table.datasourcetype,object_table.objecttype\nFROM object_table where object_table.timestamp >= 0 and\nobject_table.timestamp <= 1505990086834 and object_table.tenantid =\n'perspica' ORDER BY object_table.uuid DESC, object_table.timestamp DESC)u\non (t.mobid = u.uuid) order by health asc limit 20 offset 0;\n\n\nPlease let us know any other details?\n\n\nThanks and Regards\n\nSubramaniam\n\nOn Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:\n\n> What version of the driver are you using?\n>\n> The driver does not automatically use a cursor, but it does use prepared\n> statements which can be slower.\n>\n>\n> Can you provide the query and the jdbc query ?\n>\n>\n>\n> Dave Cramer\n>\n> [email protected]\n> www.postgresintl.com\n>\n> On 28 September 2017 at 05:59, Subramaniam C <[email protected]>\n> wrote:\n>\n>> First output show the output when the query is executed from sql command\n>> line. The second output show when it is executed from the application. AS\n>> per the output it is clear that the when the query is executed through JDBC\n>> its not using the index (health_index) instead its doing sequence scan.\n>> Please let us know how this issue can be resolved from JDBC?\n>>\n>> 1.)\n>>\n>>\n>> *Limit (cost=510711.53..510711.58 rows=20 width=72)*\n>>\n>> * -> Sort (cost=510711.53..511961.53 rows=500000 width=72)*\n>>\n>> * Sort Key: health_timeseries_table.health*\n>>\n>> * -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72)*\n>>\n>> * -> Merge Left Join (cost=0.98..491156.71 rows=500000\n>> width=64)*\n>>\n>> * Merge Cond: (object_table.uuid =\n>> health_timeseries_table.mobid)*\n>>\n>> * -> Unique (cost=0.42..57977.00 rows=500000\n>> width=64)*\n>>\n>> * -> Index Scan Backward using\n>> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n>> width=64)*\n>>\n>> * Index Cond: ((\"timestamp\" >= 0) AND\n>> (\"timestamp\" <= '1505990086834'::bigint))*\n>>\n>> * Filter: (tenantid = 'perspica'::text)*\n>>\n>> * -> Materialize (cost=0.56..426235.64 rows=55526\n>> width=16)*\n>>\n>> * -> Unique (cost=0.56..425541.56 rows=55526\n>> width=24)*\n>>\n>> * -> Index Only Scan\n>> using health_index on health_timeseries_table (cost=0.56..421644.56\n>> rows=1558800 width=24)*\n>>\n>> * Index Cond: ((\"timestamp\" >=\n>> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>>\n>> *LOG: duration: 1971.697 ms*\n>>\n>>\n>>\n>>\n>>\n>> 2.)\n>>\n>>\n>> Limit (cost=457629.21..457629.26 rows=20 width=72)\n>>\n>> -> Sort (cost=457629.21..458879.21 rows=500000 width=72)\n>>\n>> Sort Key: health_timeseries_table.health\n>>\n>> -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72)\n>>\n>> -> Merge Left Join (cost=367431.49..438074.39 rows=500000\n>> width=64)\n>>\n>> Merge Cond: (object_table.uuid =\n>> health_timeseries_table.mobid)\n>>\n>> -> Unique 
(cost=0.42..57977.00 rows=500000 width=64)\n>>\n>> -> Index Scan Backward using object_table_pkey\n>> on object_table (cost=0.42..56727.00 rows=500000 width=64)\n>>\n>> Index Cond: ((\"timestamp\" >= '0'::bigint)\n>> AND (\"timestamp\" <= '1505990400000'::bigint))\n>>\n>> Filter: (tenantid = 'perspica'::text)\n>>\n>> -> Materialize (cost=367431.07..373153.32\n>> rows=55526 width=16)\n>>\n>> -> Unique (cost=367431.07..372459.24\n>> rows=55526 width=24)\n>>\n>> -> Sort (cost=367431.07..369945.16\n>> rows=1005634 width=24)\n>>\n>> Sort Key:\n>> health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\"\n>> DESC, health_timeseries_table.health\n>>\n>> -> Seq Scan on\n>> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>>\n>>\n>> Filter: ((\"timestamp\" >=\n>> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>\n>> On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <\n>> [email protected]> wrote:\n>>\n>>> https://www.postgresql.org/docs/current/static/auto-explain.html\n>>>\n>>>\n>>> -----Message d'origine-----\n>>> De : [email protected] [mailto:\n>>> [email protected]] De la part de Julien Rouhaud\n>>> Envoyé : jeudi 28 septembre 2017 11:21\n>>> À : Subramaniam C\n>>> Cc : [email protected]\n>>> Objet : Re: [PERFORM] Slow query in JDBC\n>>>\n>>> On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <\n>>> [email protected]> wrote:\n>>> > I configured cursor_tuple_fraction to 1 but still I am facing the same\n>>> > issue.\n>>>\n>>> Can you show explain (analyze, buffers) of the query when run from psql\n>>> and run from application (you can use auto_explain for that if needed, see\n>>> https://www.postgresql.org/docs/current/static/auto-explain.html).\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.\n>>> org)\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>> !!!*********************************************************\n>>> ****************************\n>>> \"Ce message et les pièces jointes sont confidentiels et réservés à\n>>> l'usage exclusif de ses destinataires. Il peut également être protégé par\n>>> le secret professionnel. Si vous recevez ce message par erreur, merci d'en\n>>> avertir immédiatement l'expéditeur et de le détruire. L'intégrité du\n>>> message ne pouvant être assurée sur Internet, la responsabilité de\n>>> Worldline ne pourra être recherchée quant au contenu de ce message. Bien\n>>> que les meilleurs efforts soient faits pour maintenir cette transmission\n>>> exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et\n>>> sa responsabilité ne saurait être recherchée pour tout dommage résultant\n>>> d'un virus transmis.\n>>>\n>>> This e-mail and the documents attached are confidential and intended\n>>> solely for the addressee; it may also be privileged. If you receive this\n>>> e-mail in error, please notify the sender immediately and destroy it. As\n>>> its integrity cannot be secured on the Internet, the Worldline liability\n>>> cannot be triggered for the message content. 
Although the sender endeavours\n>>> to maintain a computer virus-free network, the sender does not warrant that\n>>> this transmission is virus-free and will not be liable for any damages\n>>> resulting from any virus transmitted.!!!\"\n>>>\n>>\n>>\n>\n\nThe JDBC version is 9.4-1201-jdbc41.Query :-select count(*) OVER() AS count,uuid,availability,objectname,datasourcename,datasourcetype,objecttype,health from (select distinct on (health_timeseries_table.mobid) mobid, health_timeseries_table.health, health_timeseries_table.timestamp from health_timeseries_table where timestamp >= 1505989186834 and timestamp <= 1505990086834 ORDER BY health_timeseries_table.mobid DESC, health_timeseries_table.timestamp DESC, health_timeseries_table.health ASC) t right join (SELECT DISTINCT ON (object_table.uuid) uuid, object_table.timestamp,object_table.availability,object_table.objectname,object_table.datasourcename,object_table.datasourcetype,object_table.objecttype FROM object_table where object_table.timestamp >= 0 and object_table.timestamp <= 1505990086834 and object_table.tenantid = 'perspica' ORDER BY object_table.uuid DESC, object_table.timestamp DESC)u on (t.mobid = u.uuid) order by health asc limit 20 offset 0;Please let us know any other details?Thanks and RegardsSubramaniamOn Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:What version of the driver are you using?The driver does not automatically use a cursor, but it does use prepared statements which can be slower.Can you provide the query and the jdbc query ?Dave [email protected]\nOn 28 September 2017 at 05:59, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. 
Please let us know how this issue can be resolved from JDBC?1.)Limit (cost=510711.53..510711.58 rows=20 width=72) -> Sort (cost=510711.53..511961.53 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72) -> Merge Left Join (cost=0.98..491156.71 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= 0) AND (\"timestamp\" <= '1505990086834'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=0.56..426235.64 rows=55526 width=16) -> Unique (cost=0.56..425541.56 rows=55526 width=24) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))LOG: duration: 1971.697 ms2.)Limit (cost=457629.21..457629.26 rows=20 width=72) -> Sort (cost=457629.21..458879.21 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72) -> Merge Left Join (cost=367431.49..438074.39 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=367431.07..373153.32 rows=55526 width=16) -> Unique (cost=367431.07..372459.24 rows=55526 width=24) -> Sort (cost=367431.07..369945.16 rows=1005634 width=24) Sort Key: health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\" DESC, health_timeseries_table.health -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]> wrote:https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\nEnvoyé : jeudi 28 septembre 2017 11:21\nÀ : Subramaniam C\nCc : [email protected]\nObjet : Re: [PERFORM] Slow query in JDBC\n\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n!!!*************************************************************************************\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. 
L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"",
"msg_date": "Thu, 28 Sep 2017 22:02:58 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "Why are you using such an old version of the driver ?\n\nEither way the driver is going to use prepare statement to run this, that\nis the difference from it an psql.\n\n\nIf you want to see the explain in psql you will need to do\n\nprepare foo as <your query>\n\nthen explain execute foo;\n\nFWIW upgrading the driver won't help this situation but there's still no\nreason not to upgrade.\n\nDave Cramer\n\[email protected]\nwww.postgresintl.com\n\nOn 28 September 2017 at 12:32, Subramaniam C <[email protected]>\nwrote:\n\n> The JDBC version is 9.4-1201-jdbc41.\n>\n> Query :-\n>\n> select count(*) OVER() AS count,uuid,availability,\n> objectname,datasourcename,datasourcetype,objecttype,health from (select\n> distinct on (health_timeseries_table.mobid) mobid,\n> health_timeseries_table.health, health_timeseries_table.timestamp from\n> health_timeseries_table where timestamp >= 1505989186834 and timestamp <=\n> 1505990086834 ORDER BY health_timeseries_table.mobid DESC,\n> health_timeseries_table.timestamp DESC, health_timeseries_table.health\n> ASC) t right join (SELECT DISTINCT ON (object_table.uuid) uuid,\n> object_table.timestamp,object_table.availability,object_\n> table.objectname,object_table.datasourcename,object_table.\n> datasourcetype,object_table.objecttype FROM object_table where\n> object_table.timestamp >= 0 and object_table.timestamp <= 1505990086834 and\n> object_table.tenantid = 'perspica' ORDER BY object_table.uuid DESC,\n> object_table.timestamp DESC)u on (t.mobid = u.uuid) order by health asc\n> limit 20 offset 0;\n>\n>\n> Please let us know any other details?\n>\n>\n> Thanks and Regards\n>\n> Subramaniam\n>\n> On Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:\n>\n>> What version of the driver are you using?\n>>\n>> The driver does not automatically use a cursor, but it does use prepared\n>> statements which can be slower.\n>>\n>>\n>> Can you provide the query and the jdbc query ?\n>>\n>>\n>>\n>> Dave Cramer\n>>\n>> [email protected]\n>> www.postgresintl.com\n>>\n>> On 28 September 2017 at 05:59, Subramaniam C <[email protected]>\n>> wrote:\n>>\n>>> First output show the output when the query is executed from sql command\n>>> line. The second output show when it is executed from the application. 
AS\n>>> per the output it is clear that the when the query is executed through JDBC\n>>> its not using the index (health_index) instead its doing sequence scan.\n>>> Please let us know how this issue can be resolved from JDBC?\n>>>\n>>> 1.)\n>>>\n>>>\n>>> *Limit (cost=510711.53..510711.58 rows=20 width=72)*\n>>>\n>>> * -> Sort (cost=510711.53..511961.53 rows=500000 width=72)*\n>>>\n>>> * Sort Key: health_timeseries_table.health*\n>>>\n>>> * -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72)*\n>>>\n>>> * -> Merge Left Join (cost=0.98..491156.71 rows=500000\n>>> width=64)*\n>>>\n>>> * Merge Cond: (object_table.uuid =\n>>> health_timeseries_table.mobid)*\n>>>\n>>> * -> Unique (cost=0.42..57977.00 rows=500000\n>>> width=64)*\n>>>\n>>> * -> Index Scan Backward using\n>>> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n>>> width=64)*\n>>>\n>>> * Index Cond: ((\"timestamp\" >= 0) AND\n>>> (\"timestamp\" <= '1505990086834'::bigint))*\n>>>\n>>> * Filter: (tenantid = 'perspica'::text)*\n>>>\n>>> * -> Materialize (cost=0.56..426235.64 rows=55526\n>>> width=16)*\n>>>\n>>> * -> Unique (cost=0.56..425541.56 rows=55526\n>>> width=24)*\n>>>\n>>> * -> Index Only Scan\n>>> using health_index on health_timeseries_table (cost=0.56..421644.56\n>>> rows=1558800 width=24)*\n>>>\n>>> * Index Cond: ((\"timestamp\" >=\n>>> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>>>\n>>> *LOG: duration: 1971.697 ms*\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> 2.)\n>>>\n>>>\n>>> Limit (cost=457629.21..457629.26 rows=20 width=72)\n>>>\n>>> -> Sort (cost=457629.21..458879.21 rows=500000 width=72)\n>>>\n>>> Sort Key: health_timeseries_table.health\n>>>\n>>> -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72)\n>>>\n>>> -> Merge Left Join (cost=367431.49..438074.39\n>>> rows=500000 width=64)\n>>>\n>>> Merge Cond: (object_table.uuid =\n>>> health_timeseries_table.mobid)\n>>>\n>>> -> Unique (cost=0.42..57977.00 rows=500000\n>>> width=64)\n>>>\n>>> -> Index Scan Backward using\n>>> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n>>> width=64)\n>>>\n>>> Index Cond: ((\"timestamp\" >=\n>>> '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>>\n>>> Filter: (tenantid = 'perspica'::text)\n>>>\n>>> -> Materialize (cost=367431.07..373153.32\n>>> rows=55526 width=16)\n>>>\n>>> -> Unique (cost=367431.07..372459.24\n>>> rows=55526 width=24)\n>>>\n>>> -> Sort (cost=367431.07..369945.16\n>>> rows=1005634 width=24)\n>>>\n>>> Sort Key:\n>>> health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\"\n>>> DESC, health_timeseries_table.health\n>>>\n>>> -> Seq Scan on\n>>> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>>>\n>>>\n>>> Filter: ((\"timestamp\" >=\n>>> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>>\n>>> On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <\n>>> [email protected]> wrote:\n>>>\n>>>> https://www.postgresql.org/docs/current/static/auto-explain.html\n>>>>\n>>>>\n>>>> -----Message d'origine-----\n>>>> De : [email protected] [mailto:\n>>>> [email protected]] De la part de Julien Rouhaud\n>>>> Envoyé : jeudi 28 septembre 2017 11:21\n>>>> À : Subramaniam C\n>>>> Cc : [email protected]\n>>>> Objet : Re: [PERFORM] Slow query in JDBC\n>>>>\n>>>> On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <\n>>>> [email protected]> wrote:\n>>>> > I configured cursor_tuple_fraction to 1 but still I am facing the same\n>>>> > issue.\n>>>>\n>>>> Can you show explain (analyze, buffers) of the query when run 
from psql\n>>>> and run from application (you can use auto_explain for that if needed, see\n>>>> https://www.postgresql.org/docs/current/static/auto-explain.html).\n>>>>\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.\n>>>> org)\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>> !!!*********************************************************\n>>>> ****************************\n>>>> \"Ce message et les pièces jointes sont confidentiels et réservés à\n>>>> l'usage exclusif de ses destinataires. Il peut également être protégé par\n>>>> le secret professionnel. Si vous recevez ce message par erreur, merci d'en\n>>>> avertir immédiatement l'expéditeur et de le détruire. L'intégrité du\n>>>> message ne pouvant être assurée sur Internet, la responsabilité de\n>>>> Worldline ne pourra être recherchée quant au contenu de ce message. Bien\n>>>> que les meilleurs efforts soient faits pour maintenir cette transmission\n>>>> exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et\n>>>> sa responsabilité ne saurait être recherchée pour tout dommage résultant\n>>>> d'un virus transmis.\n>>>>\n>>>> This e-mail and the documents attached are confidential and intended\n>>>> solely for the addressee; it may also be privileged. If you receive this\n>>>> e-mail in error, please notify the sender immediately and destroy it. As\n>>>> its integrity cannot be secured on the Internet, the Worldline liability\n>>>> cannot be triggered for the message content. Although the sender endeavours\n>>>> to maintain a computer virus-free network, the sender does not warrant that\n>>>> this transmission is virus-free and will not be liable for any damages\n>>>> resulting from any virus transmitted.!!!\"\n>>>>\n>>>\n>>>\n>>\n>\n\nWhy are you using such an old version of the driver ?Either way the driver is going to use prepare statement to run this, that is the difference from it an psql.If you want to see the explain in psql you will need to do prepare foo as <your query>then explain execute foo;FWIW upgrading the driver won't help this situation but there's still no reason not to upgrade.Dave [email protected]\nOn 28 September 2017 at 12:32, Subramaniam C <[email protected]> wrote:The JDBC version is 9.4-1201-jdbc41.Query :-select count(*) OVER() AS count,uuid,availability,objectname,datasourcename,datasourcetype,objecttype,health from (select distinct on (health_timeseries_table.mobid) mobid, health_timeseries_table.health, health_timeseries_table.timestamp from health_timeseries_table where timestamp >= 1505989186834 and timestamp <= 1505990086834 ORDER BY health_timeseries_table.mobid DESC, health_timeseries_table.timestamp DESC, health_timeseries_table.health ASC) t right join (SELECT DISTINCT ON (object_table.uuid) uuid, object_table.timestamp,object_table.availability,object_table.objectname,object_table.datasourcename,object_table.datasourcetype,object_table.objecttype FROM object_table where object_table.timestamp >= 0 and object_table.timestamp <= 1505990086834 and object_table.tenantid = 'perspica' ORDER BY object_table.uuid DESC, object_table.timestamp DESC)u on (t.mobid = u.uuid) order by health asc limit 20 offset 0;Please let us know any other details?Thanks and RegardsSubramaniamOn Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:What version of the driver are you using?The driver does not automatically use a cursor, but it does use prepared statements which can be 
slower.Can you provide the query and the jdbc query ?Dave [email protected]\nOn 28 September 2017 at 05:59, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. Please let us know how this issue can be resolved from JDBC?1.)Limit (cost=510711.53..510711.58 rows=20 width=72) -> Sort (cost=510711.53..511961.53 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72) -> Merge Left Join (cost=0.98..491156.71 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= 0) AND (\"timestamp\" <= '1505990086834'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=0.56..426235.64 rows=55526 width=16) -> Unique (cost=0.56..425541.56 rows=55526 width=24) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))LOG: duration: 1971.697 ms2.)Limit (cost=457629.21..457629.26 rows=20 width=72) -> Sort (cost=457629.21..458879.21 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72) -> Merge Left Join (cost=367431.49..438074.39 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=367431.07..373153.32 rows=55526 width=16) -> Unique (cost=367431.07..372459.24 rows=55526 width=24) -> Sort (cost=367431.07..369945.16 rows=1005634 width=24) Sort Key: health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\" DESC, health_timeseries_table.health -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]> wrote:https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\nEnvoyé : jeudi 28 septembre 2017 11:21\nÀ : Subramaniam C\nCc : [email protected]\nObjet : Re: [PERFORM] Slow query in JDBC\n\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n!!!*************************************************************************************\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"",
"msg_date": "Thu, 28 Sep 2017 15:04:20 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
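For reference, the PREPARE / EXPLAIN EXECUTE technique suggested above can be tried in psql roughly as follows. This is only a sketch against the inner health_timeseries_table subquery quoted in this thread; the statement name, parameter types and values are illustrative, and the full statement would be prepared the same way with all of its bind parameters.

    PREPARE health_scan (bigint, bigint) AS
        SELECT DISTINCT ON (mobid) mobid, health, "timestamp"
        FROM health_timeseries_table
        WHERE "timestamp" >= $1 AND "timestamp" <= $2
        ORDER BY mobid DESC, "timestamp" DESC, health ASC;

    EXPLAIN EXECUTE health_scan(1505989186834, 1505990086834);

Repeating the EXPLAIN EXECUTE several times can also be informative: the server may switch from a custom plan to a generic plan after a few executions, which is typically the plan a long-lived JDBC PreparedStatement ends up with.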
{
"msg_contents": "If I run the below commands from psql command line then in the explain\noutput it showing as its using the index.\n\nprepare foo as <your query>\nexplain execute foo;\n\nBut if I run the same query from my application using JDBC\nPreparedStatement then it showing as its doing sequence scan.\n\nTo which version should I upgrade my JDBC driver? Will it help resolving\nthis issue?\n\nPlease help.\n\nThanks and Regards\nSubramaniam\n\nOn Fri, Sep 29, 2017 at 12:34 AM, Dave Cramer <[email protected]> wrote:\n\n> Why are you using such an old version of the driver ?\n>\n> Either way the driver is going to use prepare statement to run this, that\n> is the difference from it an psql.\n>\n>\n> If you want to see the explain in psql you will need to do\n>\n> prepare foo as <your query>\n>\n> then explain execute foo;\n>\n> FWIW upgrading the driver won't help this situation but there's still no\n> reason not to upgrade.\n>\n> Dave Cramer\n>\n> [email protected]\n> www.postgresintl.com\n>\n> On 28 September 2017 at 12:32, Subramaniam C <[email protected]>\n> wrote:\n>\n>> The JDBC version is 9.4-1201-jdbc41.\n>>\n>> Query :-\n>>\n>> select count(*) OVER() AS count,uuid,availability,object\n>> name,datasourcename,datasourcetype,objecttype,health from (select\n>> distinct on (health_timeseries_table.mobid) mobid,\n>> health_timeseries_table.health, health_timeseries_table.timestamp from\n>> health_timeseries_table where timestamp >= 1505989186834 and timestamp <=\n>> 1505990086834 ORDER BY health_timeseries_table.mobid DESC,\n>> health_timeseries_table.timestamp DESC, health_timeseries_table.health\n>> ASC) t right join (SELECT DISTINCT ON (object_table.uuid) uuid,\n>> object_table.timestamp,object_table.availability,object_tabl\n>> e.objectname,object_table.datasourcename,object_table.dataso\n>> urcetype,object_table.objecttype FROM object_table where\n>> object_table.timestamp >= 0 and object_table.timestamp <= 1505990086834 and\n>> object_table.tenantid = 'perspica' ORDER BY object_table.uuid DESC,\n>> object_table.timestamp DESC)u on (t.mobid = u.uuid) order by health asc\n>> limit 20 offset 0;\n>>\n>>\n>> Please let us know any other details?\n>>\n>>\n>> Thanks and Regards\n>>\n>> Subramaniam\n>>\n>> On Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:\n>>\n>>> What version of the driver are you using?\n>>>\n>>> The driver does not automatically use a cursor, but it does use prepared\n>>> statements which can be slower.\n>>>\n>>>\n>>> Can you provide the query and the jdbc query ?\n>>>\n>>>\n>>>\n>>> Dave Cramer\n>>>\n>>> [email protected]\n>>> www.postgresintl.com\n>>>\n>>> On 28 September 2017 at 05:59, Subramaniam C <[email protected]\n>>> > wrote:\n>>>\n>>>> First output show the output when the query is executed from sql\n>>>> command line. The second output show when it is executed from the\n>>>> application. AS per the output it is clear that the when the query is\n>>>> executed through JDBC its not using the index (health_index) instead its\n>>>> doing sequence scan. 
Please let us know how this issue can be resolved from\n>>>> JDBC?\n>>>>\n>>>> 1.)\n>>>>\n>>>>\n>>>> *Limit (cost=510711.53..510711.58 rows=20 width=72)*\n>>>>\n>>>> * -> Sort (cost=510711.53..511961.53 rows=500000 width=72)*\n>>>>\n>>>> * Sort Key: health_timeseries_table.health*\n>>>>\n>>>> * -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72)*\n>>>>\n>>>> * -> Merge Left Join (cost=0.98..491156.71 rows=500000\n>>>> width=64)*\n>>>>\n>>>> * Merge Cond: (object_table.uuid =\n>>>> health_timeseries_table.mobid)*\n>>>>\n>>>> * -> Unique (cost=0.42..57977.00 rows=500000\n>>>> width=64)*\n>>>>\n>>>> * -> Index Scan Backward using\n>>>> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n>>>> width=64)*\n>>>>\n>>>> * Index Cond: ((\"timestamp\" >= 0) AND\n>>>> (\"timestamp\" <= '1505990086834'::bigint))*\n>>>>\n>>>> * Filter: (tenantid = 'perspica'::text)*\n>>>>\n>>>> * -> Materialize (cost=0.56..426235.64 rows=55526\n>>>> width=16)*\n>>>>\n>>>> * -> Unique (cost=0.56..425541.56\n>>>> rows=55526 width=24)*\n>>>>\n>>>> * -> Index Only Scan\n>>>> using health_index on health_timeseries_table (cost=0.56..421644.56\n>>>> rows=1558800 width=24)*\n>>>>\n>>>> * Index Cond: ((\"timestamp\" >=\n>>>> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>>>>\n>>>> *LOG: duration: 1971.697 ms*\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> 2.)\n>>>>\n>>>>\n>>>> Limit (cost=457629.21..457629.26 rows=20 width=72)\n>>>>\n>>>> -> Sort (cost=457629.21..458879.21 rows=500000 width=72)\n>>>>\n>>>> Sort Key: health_timeseries_table.health\n>>>>\n>>>> -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72)\n>>>>\n>>>> -> Merge Left Join (cost=367431.49..438074.39\n>>>> rows=500000 width=64)\n>>>>\n>>>> Merge Cond: (object_table.uuid =\n>>>> health_timeseries_table.mobid)\n>>>>\n>>>> -> Unique (cost=0.42..57977.00 rows=500000\n>>>> width=64)\n>>>>\n>>>> -> Index Scan Backward using\n>>>> object_table_pkey on object_table (cost=0.42..56727.00 rows=500000\n>>>> width=64)\n>>>>\n>>>> Index Cond: ((\"timestamp\" >=\n>>>> '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>>>\n>>>> Filter: (tenantid = 'perspica'::text)\n>>>>\n>>>> -> Materialize (cost=367431.07..373153.32\n>>>> rows=55526 width=16)\n>>>>\n>>>> -> Unique (cost=367431.07..372459.24\n>>>> rows=55526 width=24)\n>>>>\n>>>> -> Sort (cost=367431.07..369945.16\n>>>> rows=1005634 width=24)\n>>>>\n>>>> Sort Key:\n>>>> health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\"\n>>>> DESC, health_timeseries_table.health\n>>>>\n>>>> -> Seq Scan on\n>>>> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>>>>\n>>>>\n>>>> Filter: ((\"timestamp\" >=\n>>>> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>>>\n>>>> On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <\n>>>> [email protected]> wrote:\n>>>>\n>>>>> https://www.postgresql.org/docs/current/static/auto-explain.html\n>>>>>\n>>>>>\n>>>>> -----Message d'origine-----\n>>>>> De : [email protected] [mailto:\n>>>>> [email protected]] De la part de Julien Rouhaud\n>>>>> Envoyé : jeudi 28 septembre 2017 11:21\n>>>>> À : Subramaniam C\n>>>>> Cc : [email protected]\n>>>>> Objet : Re: [PERFORM] Slow query in JDBC\n>>>>>\n>>>>> On Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <\n>>>>> [email protected]> wrote:\n>>>>> > I configured cursor_tuple_fraction to 1 but still I am facing the\n>>>>> same\n>>>>> > issue.\n>>>>>\n>>>>> Can you show explain (analyze, buffers) of the query when run from\n>>>>> psql and run from 
application (you can use auto_explain for that if needed,\n>>>>> see https://www.postgresql.org/docs/current/static/auto-explain.html).\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.\n>>>>> org)\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>\n>>>>> !!!*********************************************************\n>>>>> ****************************\n>>>>> \"Ce message et les pièces jointes sont confidentiels et réservés à\n>>>>> l'usage exclusif de ses destinataires. Il peut également être protégé par\n>>>>> le secret professionnel. Si vous recevez ce message par erreur, merci d'en\n>>>>> avertir immédiatement l'expéditeur et de le détruire. L'intégrité du\n>>>>> message ne pouvant être assurée sur Internet, la responsabilité de\n>>>>> Worldline ne pourra être recherchée quant au contenu de ce message. Bien\n>>>>> que les meilleurs efforts soient faits pour maintenir cette transmission\n>>>>> exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et\n>>>>> sa responsabilité ne saurait être recherchée pour tout dommage résultant\n>>>>> d'un virus transmis.\n>>>>>\n>>>>> This e-mail and the documents attached are confidential and intended\n>>>>> solely for the addressee; it may also be privileged. If you receive this\n>>>>> e-mail in error, please notify the sender immediately and destroy it. As\n>>>>> its integrity cannot be secured on the Internet, the Worldline liability\n>>>>> cannot be triggered for the message content. Although the sender endeavours\n>>>>> to maintain a computer virus-free network, the sender does not warrant that\n>>>>> this transmission is virus-free and will not be liable for any damages\n>>>>> resulting from any virus transmitted.!!!\"\n>>>>>\n>>>>\n>>>>\n>>>\n>>\n>\n\nIf I run the below commands from psql command line then in the explain output it showing as its using the index.prepare foo as <your query>explain execute foo;But if I run the same query from my application using JDBC PreparedStatement then it showing as its doing sequence scan.To which version should I upgrade my JDBC driver? 
Will it help resolving this issue?Please help.Thanks and RegardsSubramaniamOn Fri, Sep 29, 2017 at 12:34 AM, Dave Cramer <[email protected]> wrote:Why are you using such an old version of the driver ?Either way the driver is going to use prepare statement to run this, that is the difference from it an psql.If you want to see the explain in psql you will need to do prepare foo as <your query>then explain execute foo;FWIW upgrading the driver won't help this situation but there's still no reason not to upgrade.Dave [email protected]\nOn 28 September 2017 at 12:32, Subramaniam C <[email protected]> wrote:The JDBC version is 9.4-1201-jdbc41.Query :-select count(*) OVER() AS count,uuid,availability,objectname,datasourcename,datasourcetype,objecttype,health from (select distinct on (health_timeseries_table.mobid) mobid, health_timeseries_table.health, health_timeseries_table.timestamp from health_timeseries_table where timestamp >= 1505989186834 and timestamp <= 1505990086834 ORDER BY health_timeseries_table.mobid DESC, health_timeseries_table.timestamp DESC, health_timeseries_table.health ASC) t right join (SELECT DISTINCT ON (object_table.uuid) uuid, object_table.timestamp,object_table.availability,object_table.objectname,object_table.datasourcename,object_table.datasourcetype,object_table.objecttype FROM object_table where object_table.timestamp >= 0 and object_table.timestamp <= 1505990086834 and object_table.tenantid = 'perspica' ORDER BY object_table.uuid DESC, object_table.timestamp DESC)u on (t.mobid = u.uuid) order by health asc limit 20 offset 0;Please let us know any other details?Thanks and RegardsSubramaniamOn Thu, Sep 28, 2017 at 7:29 PM, Dave Cramer <[email protected]> wrote:What version of the driver are you using?The driver does not automatically use a cursor, but it does use prepared statements which can be slower.Can you provide the query and the jdbc query ?Dave [email protected]\nOn 28 September 2017 at 05:59, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. 
Please let us know how this issue can be resolved from JDBC?1.)Limit (cost=510711.53..510711.58 rows=20 width=72) -> Sort (cost=510711.53..511961.53 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=0.98..497406.71 rows=500000 width=72) -> Merge Left Join (cost=0.98..491156.71 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= 0) AND (\"timestamp\" <= '1505990086834'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=0.56..426235.64 rows=55526 width=16) -> Unique (cost=0.56..425541.56 rows=55526 width=24) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))LOG: duration: 1971.697 ms2.)Limit (cost=457629.21..457629.26 rows=20 width=72) -> Sort (cost=457629.21..458879.21 rows=500000 width=72) Sort Key: health_timeseries_table.health -> WindowAgg (cost=367431.49..444324.39 rows=500000 width=72) -> Merge Left Join (cost=367431.49..438074.39 rows=500000 width=64) Merge Cond: (object_table.uuid = health_timeseries_table.mobid) -> Unique (cost=0.42..57977.00 rows=500000 width=64) -> Index Scan Backward using object_table_pkey on object_table (cost=0.42..56727.00 rows=500000 width=64) Index Cond: ((\"timestamp\" >= '0'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint)) Filter: (tenantid = 'perspica'::text) -> Materialize (cost=367431.07..373153.32 rows=55526 width=16) -> Unique (cost=367431.07..372459.24 rows=55526 width=24) -> Sort (cost=367431.07..369945.16 rows=1005634 width=24) Sort Key: health_timeseries_table.mobid DESC, health_timeseries_table.\"timestamp\" DESC, health_timeseries_table.health -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe <[email protected]> wrote:https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\n-----Message d'origine-----\nDe : [email protected] [mailto:[email protected]] De la part de Julien Rouhaud\nEnvoyé : jeudi 28 septembre 2017 11:21\nÀ : Subramaniam C\nCc : [email protected]\nObjet : Re: [PERFORM] Slow query in JDBC\n\nOn Thu, Sep 28, 2017 at 10:58 AM, Subramaniam C <[email protected]> wrote:\n> I configured cursor_tuple_fraction to 1 but still I am facing the same\n> issue.\n\nCan you show explain (analyze, buffers) of the query when run from psql and run from application (you can use auto_explain for that if needed, see https://www.postgresql.org/docs/current/static/auto-explain.html).\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n!!!*************************************************************************************\n\"Ce message et les pièces jointes sont confidentiels et réservés à l'usage exclusif de ses destinataires. Il peut également être protégé par le secret professionnel. Si vous recevez ce message par erreur, merci d'en avertir immédiatement l'expéditeur et de le détruire. 
L'intégrité du message ne pouvant être assurée sur Internet, la responsabilité de Worldline ne pourra être recherchée quant au contenu de ce message. Bien que les meilleurs efforts soient faits pour maintenir cette transmission exempte de tout virus, l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne saurait être recherchée pour tout dommage résultant d'un virus transmis.\n\nThis e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Worldline liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted.!!!\"",
"msg_date": "Fri, 29 Sep 2017 11:27:11 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query in JDBC"
},
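As a complement to the prepared-statement test, the auto_explain module already pointed to earlier in the thread can log the plans that sessions actually use. A minimal, illustrative setup is sketched below; the threshold values are only examples, LOAD requires superuser, and to trace the JDBC application's own sessions the module would normally be preloaded via session_preload_libraries (or shared_preload_libraries) in postgresql.conf instead of LOAD.

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;   -- log every statement's plan
    SET auto_explain.log_analyze = on;       -- include actual row counts and timings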
{
"msg_contents": "On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <[email protected]>\nwrote:\n\n> First output show the output when the query is executed from sql command\n> line. The second output show when it is executed from the application. AS\n> per the output it is clear that the when the query is executed through JDBC\n> its not using the index (health_index) instead its doing sequence scan.\n> Please let us know how this issue can be resolved from JDBC?\n>\n> 1.)\n>\n>\n> * -> Index Only Scan\n> using health_index on health_timeseries_table (cost=0.56..421644.56\n> rows=1558800 width=24)*\n>\n> * Index Cond: ((\"timestamp\" >=\n> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>\n>\n\n> 2.)\n>\n>\n> -> Seq Scan on\n> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>\n> Filter: ((\"timestamp\" >=\n> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>\n\n\nThose are different queries, so it is not terribly surprising it might\nchoose a different plan.\n\nFor this type of comparison, you need to compare identical queries,\nincluding parameter.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. Please let us know how this issue can be resolved from JDBC?1.) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint)) 2.) -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))Those are different queries, so it is not terribly surprising it might choose a different plan.For this type of comparison, you need to compare identical queries, including parameter.Cheers,Jeff",
"msg_date": "Thu, 28 Sep 2017 23:49:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
},
{
"msg_contents": "Yes you are right the timestamp which the application was providing was in\nseconds whereas the query which was using index had a timestamp in\nmilliseconds. So the query was taking time in application.\n\nOn Fri, Sep 29, 2017 at 12:19 PM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <[email protected]\n> > wrote:\n>\n>> First output show the output when the query is executed from sql command\n>> line. The second output show when it is executed from the application. AS\n>> per the output it is clear that the when the query is executed through JDBC\n>> its not using the index (health_index) instead its doing sequence scan.\n>> Please let us know how this issue can be resolved from JDBC?\n>>\n>> 1.)\n>>\n>>\n>> * -> Index Only Scan\n>> using health_index on health_timeseries_table (cost=0.56..421644.56\n>> rows=1558800 width=24)*\n>>\n>> * Index Cond: ((\"timestamp\" >=\n>> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>>\n>>\n>\n>> 2.)\n>>\n>>\n>> -> Seq Scan on\n>> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>>\n>> Filter: ((\"timestamp\" >=\n>> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>\n>\n>\n> Those are different queries, so it is not terribly surprising it might\n> choose a different plan.\n>\n> For this type of comparison, you need to compare identical queries,\n> including parameter.\n>\n> Cheers,\n>\n> Jeff\n>\n\nYes you are right the timestamp which the application was providing was in seconds whereas the query which was using index had a timestamp in milliseconds. So the query was taking time in application.On Fri, Sep 29, 2017 at 12:19 PM, Jeff Janes <[email protected]> wrote:On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. Please let us know how this issue can be resolved from JDBC?1.) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint)) 2.) -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))Those are different queries, so it is not terribly surprising it might choose a different plan.For this type of comparison, you need to compare identical queries, including parameter.Cheers,Jeff",
"msg_date": "Fri, 29 Sep 2017 16:14:03 +0530",
"msg_from": "Subramaniam C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query in JDBC"
},
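In other words, the two plans differed because the bound range values were not in the same unit. A quick illustration of the gap, assuming the column stores milliseconds since the epoch as bigint (values shown are only examples):

    SELECT (extract(epoch FROM now()) * 1000)::bigint AS epoch_ms,  -- e.g. 1505990086834
           extract(epoch FROM now())::bigint          AS epoch_s;   -- e.g. 1505990086

Binding a seconds-based value against a milliseconds-based column changes the selectivity of the timestamp range entirely, which is consistent with Jeff's point that the two queries being compared were simply not the same query.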
{
"msg_contents": "Good catch Jeff.\n\nas for which version. We always recommend the latest version. 42.1.4\n\nDave Cramer\n\[email protected]\nwww.postgresintl.com\n\nOn 29 September 2017 at 06:44, Subramaniam C <[email protected]>\nwrote:\n\n> Yes you are right the timestamp which the application was providing was in\n> seconds whereas the query which was using index had a timestamp in\n> milliseconds. So the query was taking time in application.\n>\n> On Fri, Sep 29, 2017 at 12:19 PM, Jeff Janes <[email protected]> wrote:\n>\n>> On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <\n>> [email protected]> wrote:\n>>\n>>> First output show the output when the query is executed from sql command\n>>> line. The second output show when it is executed from the application. AS\n>>> per the output it is clear that the when the query is executed through JDBC\n>>> its not using the index (health_index) instead its doing sequence scan.\n>>> Please let us know how this issue can be resolved from JDBC?\n>>>\n>>> 1.)\n>>>\n>>>\n>>> * -> Index Only Scan\n>>> using health_index on health_timeseries_table (cost=0.56..421644.56\n>>> rows=1558800 width=24)*\n>>>\n>>> * Index Cond: ((\"timestamp\" >=\n>>> '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint))*\n>>>\n>>>\n>>\n>>> 2.)\n>>>\n>>>\n>>> -> Seq Scan on\n>>> health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24)\n>>>\n>>> Filter: ((\"timestamp\" >=\n>>> '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))\n>>>\n>>\n>>\n>> Those are different queries, so it is not terribly surprising it might\n>> choose a different plan.\n>>\n>> For this type of comparison, you need to compare identical queries,\n>> including parameter.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>\n>\n\nGood catch Jeff.as for which version. We always recommend the latest version. 42.1.4Dave [email protected]\nOn 29 September 2017 at 06:44, Subramaniam C <[email protected]> wrote:Yes you are right the timestamp which the application was providing was in seconds whereas the query which was using index had a timestamp in milliseconds. So the query was taking time in application.On Fri, Sep 29, 2017 at 12:19 PM, Jeff Janes <[email protected]> wrote:On Thu, Sep 28, 2017 at 2:59 AM, Subramaniam C <[email protected]> wrote:First output show the output when the query is executed from sql command line. The second output show when it is executed from the application. AS per the output it is clear that the when the query is executed through JDBC its not using the index (health_index) instead its doing sequence scan. Please let us know how this issue can be resolved from JDBC?1.) -> Index Only Scan using health_index on health_timeseries_table (cost=0.56..421644.56 rows=1558800 width=24) Index Cond: ((\"timestamp\" >= '1505989186834'::bigint) AND (\"timestamp\" <= '1505990086834'::bigint)) 2.) -> Seq Scan on health_timeseries_table (cost=0.00..267171.00 rows=1005634 width=24) Filter: ((\"timestamp\" >= '1505989500000'::bigint) AND (\"timestamp\" <= '1505990400000'::bigint))Those are different queries, so it is not terribly surprising it might choose a different plan.For this type of comparison, you need to compare identical queries, including parameter.Cheers,Jeff",
"msg_date": "Fri, 29 Sep 2017 09:04:46 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query in JDBC"
}
] |
[
{
"msg_contents": "Hi all,\n\n\nI've an environment 9.4 + bdr:\nPostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n4.7.2-5) 4.7.2, 64-bit\n\nkernel version:\n3.2.0-4-amd64 #1 SMP Debian 3.2.65-1 x86_64 GNU/Linux\n\nThis is consolidation databases, in this machine there are around 250+ wal\nsender processes.\n\ntop output revealed high system cpu:\n%Cpu(s): 1.4 us, 49.7 sy, 0.0 ni, 48.8 id, 0.0 wa, 0.0 hi, 0.0 si,\n 0.0 st\n\nprofiling cpu with perf:\n\nperf top -e cpu-clock\n\nEvents: 142K cpu-clock\n 82.37% [kernel] [k] __mutex_lock_common.isra.5\n 4.49% [kernel] [k] do_raw_spin_lock\n 2.23% [kernel] [k] mutex_lock\n 2.16% [kernel] [k] mutex_unlock\n 2.12% [kernel] [k] arch_local_irq_restore\n 1.73% postgres [.] ValidXLogRecord\n 0.87% [kernel] [k] __mutex_unlock_slowpath\n 0.78% [kernel] [k] arch_local_irq_enable\n 0.63% [kernel] [k] sys_recvfrom\n\n\nfinally get which processes (wal senders) that are using mutexes:\n\nperf top -e task-clock -p 55382\n\nEvents: 697 task-clock\n 88.08% [kernel] [k] __mutex_lock_common.isra.5\n 3.27% [kernel] [k] do_raw_spin_lock\n 2.34% [kernel] [k] arch_local_irq_restore\n 2.10% postgres [.] ValidXLogRecord\n 1.87% [kernel] [k] mutex_unlock\n 1.87% [kernel] [k] mutex_lock\n 0.47% [kernel] [k] sys_recvfrom\n\nI think bdr is only reading wal file (current state is we behind current\nwal lsn),\nso why reading wal file needs mutex?\n\nI wonder, is there kernel version has better handling mutexes?\n\n\n-- \nregards\n\nujang jaenudin | DBA Consultant (Freelancer)\nhttp://ora62.wordpress.com\nhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n\nHi all,I've an environment 9.4 + bdr: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bitkernel version:3.2.0-4-amd64 #1 SMP Debian 3.2.65-1 x86_64 GNU/LinuxThis is consolidation databases, in this machine there are around 250+ wal sender processes.top output revealed high system cpu:%Cpu(s): 1.4 us, 49.7 sy, 0.0 ni, 48.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 stprofiling cpu with perf:perf top -e cpu-clockEvents: 142K cpu-clock 82.37% [kernel] [k] __mutex_lock_common.isra.5 4.49% [kernel] [k] do_raw_spin_lock 2.23% [kernel] [k] mutex_lock 2.16% [kernel] [k] mutex_unlock 2.12% [kernel] [k] arch_local_irq_restore 1.73% postgres [.] ValidXLogRecord 0.87% [kernel] [k] __mutex_unlock_slowpath 0.78% [kernel] [k] arch_local_irq_enable 0.63% [kernel] [k] sys_recvfromfinally get which processes (wal senders) that are using mutexes:perf top -e task-clock -p 55382Events: 697 task-clock 88.08% [kernel] [k] __mutex_lock_common.isra.5 3.27% [kernel] [k] do_raw_spin_lock 2.34% [kernel] [k] arch_local_irq_restore 2.10% postgres [.] ValidXLogRecord 1.87% [kernel] [k] mutex_unlock 1.87% [kernel] [k] mutex_lock 0.47% [kernel] [k] sys_recvfromI think bdr is only reading wal file (current state is we behind current wal lsn),so why reading wal file needs mutex?I wonder, is there kernel version has better handling mutexes? -- regardsujang jaenudin | DBA Consultant (Freelancer)http://ora62.wordpress.comhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab",
"msg_date": "Sat, 30 Sep 2017 06:07:06 +0700",
"msg_from": "milist ujang <[email protected]>",
"msg_from_op": true,
"msg_subject": "BDR, wal sender, high system cpu, mutex_lock_common"
},
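Since the walsenders are described as being behind the current WAL position, one way to quantify how far behind each replication slot is on a 9.4 server is a query along these lines (on 9.4 the functions still carry the xlog names; this is only a monitoring sketch and does not by itself explain the mutex contention):

    SELECT slot_name, plugin, active,
           pg_xlog_location_diff(pg_current_xlog_insert_location(), restart_lsn) AS restart_lag_bytes
    FROM pg_replication_slots
    ORDER BY restart_lag_bytes DESC;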
{
"msg_contents": "additional info, strace output :\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 98.30 1.030072 5 213063 201463 read\n 1.69 0.017686 0 201464 201464 recvfrom\n 0.01 0.000110 0 806 lseek\n 0.00 0.000043 0 474 468 rt_sigreturn\n 0.00 0.000000 0 6 open\n 0.00 0.000000 0 6 close\n------ ----------- ----------- --------- --------- ----------------\n100.00 1.047911 415819 403395 total\n\n\n\nOn Sat, Sep 30, 2017 at 6:07 AM, milist ujang <[email protected]>\nwrote:\n\n> Hi all,\n>\n>\n> I've an environment 9.4 + bdr:\n> PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian\n> 4.7.2-5) 4.7.2, 64-bit\n>\n> kernel version:\n> 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1 x86_64 GNU/Linux\n>\n> This is consolidation databases, in this machine there are around 250+ wal\n> sender processes.\n>\n> top output revealed high system cpu:\n> %Cpu(s): 1.4 us, 49.7 sy, 0.0 ni, 48.8 id, 0.0 wa, 0.0 hi, 0.0 si,\n> 0.0 st\n>\n> profiling cpu with perf:\n>\n> perf top -e cpu-clock\n>\n> Events: 142K cpu-clock\n> 82.37% [kernel] [k] __mutex_lock_common.isra.5\n> 4.49% [kernel] [k] do_raw_spin_lock\n> 2.23% [kernel] [k] mutex_lock\n> 2.16% [kernel] [k] mutex_unlock\n> 2.12% [kernel] [k] arch_local_irq_restore\n> 1.73% postgres [.] ValidXLogRecord\n> 0.87% [kernel] [k] __mutex_unlock_slowpath\n> 0.78% [kernel] [k] arch_local_irq_enable\n> 0.63% [kernel] [k] sys_recvfrom\n>\n>\n> finally get which processes (wal senders) that are using mutexes:\n>\n> perf top -e task-clock -p 55382\n>\n> Events: 697 task-clock\n> 88.08% [kernel] [k] __mutex_lock_common.isra.5\n> 3.27% [kernel] [k] do_raw_spin_lock\n> 2.34% [kernel] [k] arch_local_irq_restore\n> 2.10% postgres [.] ValidXLogRecord\n> 1.87% [kernel] [k] mutex_unlock\n> 1.87% [kernel] [k] mutex_lock\n> 0.47% [kernel] [k] sys_recvfrom\n>\n> I think bdr is only reading wal file (current state is we behind current\n> wal lsn),\n> so why reading wal file needs mutex?\n>\n> I wonder, is there kernel version has better handling mutexes?\n>\n>\n> --\n> regards\n>\n> ujang jaenudin | DBA Consultant (Freelancer)\n> http://ora62.wordpress.com\n> http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n>\n\n\n\n-- \nregards\n\nujang jaenudin | DBA Consultant (Freelancer)\nhttp://ora62.wordpress.com\nhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n\nadditional info, strace output :% time seconds usecs/call calls errors syscall------ ----------- ----------- --------- --------- ---------------- 98.30 1.030072 5 213063 201463 read 1.69 0.017686 0 201464 201464 recvfrom 0.01 0.000110 0 806 lseek 0.00 0.000043 0 474 468 rt_sigreturn 0.00 0.000000 0 6 open 0.00 0.000000 0 6 close------ ----------- ----------- --------- --------- ----------------100.00 1.047911 415819 403395 totalOn Sat, Sep 30, 2017 at 6:07 AM, milist ujang <[email protected]> wrote:Hi all,I've an environment 9.4 + bdr: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bitkernel version:3.2.0-4-amd64 #1 SMP Debian 3.2.65-1 x86_64 GNU/LinuxThis is consolidation databases, in this machine there are around 250+ wal sender processes.top output revealed high system cpu:%Cpu(s): 1.4 us, 49.7 sy, 0.0 ni, 48.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 stprofiling cpu with perf:perf top -e cpu-clockEvents: 142K cpu-clock 82.37% [kernel] [k] __mutex_lock_common.isra.5 4.49% [kernel] [k] do_raw_spin_lock 2.23% [kernel] [k] mutex_lock 2.16% [kernel] [k] mutex_unlock 2.12% [kernel] [k] arch_local_irq_restore 
1.73% postgres [.] ValidXLogRecord 0.87% [kernel] [k] __mutex_unlock_slowpath 0.78% [kernel] [k] arch_local_irq_enable 0.63% [kernel] [k] sys_recvfromfinally get which processes (wal senders) that are using mutexes:perf top -e task-clock -p 55382Events: 697 task-clock 88.08% [kernel] [k] __mutex_lock_common.isra.5 3.27% [kernel] [k] do_raw_spin_lock 2.34% [kernel] [k] arch_local_irq_restore 2.10% postgres [.] ValidXLogRecord 1.87% [kernel] [k] mutex_unlock 1.87% [kernel] [k] mutex_lock 0.47% [kernel] [k] sys_recvfromI think bdr is only reading wal file (current state is we behind current wal lsn),so why reading wal file needs mutex?I wonder, is there kernel version has better handling mutexes? -- regardsujang jaenudin | DBA Consultant (Freelancer)http://ora62.wordpress.comhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n\n-- regardsujang jaenudin | DBA Consultant (Freelancer)http://ora62.wordpress.comhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab",
"msg_date": "Sun, 1 Oct 2017 08:36:15 +0700",
"msg_from": "milist ujang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BDR, wal sender, high system cpu, mutex_lock_common"
}
] |
[
{
"msg_contents": "Hi,\nI need to use the max function in my query. I had very bad performance when\nI used the max :\n\n SELECT Ma.User_Id,\n COUNT(*) COUNT\n FROM Manuim Ma\n WHERE Ma.Bb_Open_Date =\n (SELECT max(Bb_Open_Date)\n FROM Manuim Man\n WHERE Man.User_Id = Ma.User_Id\n )\n GROUP BY Ma.User_Id\n HAVING COUNT(*) > 1;\n\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.56..3250554784.13 rows=115111 width=18)\n Group Key: ma.user_id\n Filter: (count(*) > 1)\n -> Index Scan using manuim_i_user_id on manuim ma\n(cost=0.56..3250552295.59 rows=178324 width=10)\n Filter: (bb_open_date = (SubPlan 1))\n SubPlan 1\n -> Aggregate (cost=90.98..90.99 rows=1 width=8)\n -> Index Scan using manuim_i_user_id on manuim man\n(cost=0.56..90.92 rows=22 width=8)\n Index Cond: ((user_id)::text = (ma.user_id)::text)\n(9 rows)\n\n\n\nSo I used the limit 1 option :\n\n SELECT Ma.User_Id,\n COUNT(*) COUNT\n FROM Manuim Ma\n WHERE Ma.Bb_Open_Date =\n (SELECT Bb_Open_Date\n FROM Manuim Man\n WHERE Man.User_Id = Ma.User_Id order\nby bb_open_date desc limit 1\n )\n GROUP BY Ma.User_Id\n HAVING COUNT(*) > 1;\n\nand the performance are still the same :\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.56..3252248863.46 rows=115111 width=18)\n Group Key: ma.user_id\n Filter: (count(*) > 1)\n -> Index Scan using manuim_i_user_id on manuim ma\n(cost=0.56..3252246374.92 rows=178324 width=10)\n Filter: (bb_open_date = (SubPlan 1))\n SubPlan 1\n -> Limit (cost=91.03..91.03 rows=1 width=8)\n -> Sort (cost=91.03..91.09 rows=22 width=8)\n Sort Key: man.bb_open_date DESC\n -> Index Scan using manuim_i_user_id on manuim man\n(cost=0.56..90.92 rows=22 width=8)\n Index Cond: ((user_id)::text =\n(ma.user_id)::text)\n(11 rows)\n\n\n\nthe reading on the table manuim takes a lot of effort, what else can I do ?\nthe table`s size is 8G.\n\nselect count(*) from manuim;\n count\n----------\n 35664828\n(1 row)\n\nthe indexes on the table :\n \"manuim_bb_open_date\" btree (bb_open_date)\n\"manuim_i_user_id\" btree (user_id)\n\n\nAny idea how can I continue from here ? Thanks , Mariel.\n\nHi,I need to use the max function in my query. 
I had very bad performance when I used the max : SELECT Ma.User_Id, COUNT(*) COUNT FROM Manuim Ma WHERE Ma.Bb_Open_Date = (SELECT max(Bb_Open_Date) FROM Manuim Man WHERE Man.User_Id = Ma.User_Id ) GROUP BY Ma.User_Id HAVING COUNT(*) > 1; QUERY PLAN --------------------------------------------------------------------------------------------------------- GroupAggregate (cost=0.56..3250554784.13 rows=115111 width=18) Group Key: ma.user_id Filter: (count(*) > 1) -> Index Scan using manuim_i_user_id on manuim ma (cost=0.56..3250552295.59 rows=178324 width=10) Filter: (bb_open_date = (SubPlan 1)) SubPlan 1 -> Aggregate (cost=90.98..90.99 rows=1 width=8) -> Index Scan using manuim_i_user_id on manuim man (cost=0.56..90.92 rows=22 width=8) Index Cond: ((user_id)::text = (ma.user_id)::text)(9 rows)So I used the limit 1 option : SELECT Ma.User_Id, COUNT(*) COUNT FROM Manuim Ma WHERE Ma.Bb_Open_Date = (SELECT Bb_Open_Date FROM Manuim Man WHERE Man.User_Id = Ma.User_Id order by bb_open_date desc limit 1 ) GROUP BY Ma.User_Id HAVING COUNT(*) > 1;and the performance are still the same : QUERY PLAN --------------------------------------------------------------------------------------------------------------- GroupAggregate (cost=0.56..3252248863.46 rows=115111 width=18) Group Key: ma.user_id Filter: (count(*) > 1) -> Index Scan using manuim_i_user_id on manuim ma (cost=0.56..3252246374.92 rows=178324 width=10) Filter: (bb_open_date = (SubPlan 1)) SubPlan 1 -> Limit (cost=91.03..91.03 rows=1 width=8) -> Sort (cost=91.03..91.09 rows=22 width=8) Sort Key: man.bb_open_date DESC -> Index Scan using manuim_i_user_id on manuim man (cost=0.56..90.92 rows=22 width=8) Index Cond: ((user_id)::text = (ma.user_id)::text)(11 rows)the reading on the table manuim takes a lot of effort, what else can I do ? the table`s size is 8G. select count(*) from manuim; count ---------- 35664828(1 row)the indexes on the table : \"manuim_bb_open_date\" btree (bb_open_date)\"manuim_i_user_id\" btree (user_id)Any idea how can I continue from here ? Thanks , Mariel.",
"msg_date": "Sun, 1 Oct 2017 15:41:37 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "select with max functions"
},
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Mariel Cherkassky\" <[email protected]>\n> Para: [email protected]\n> Enviados: Domingo, 1 de Octubre 2017 9:41:37\n> Asunto: [PERFORM] select with max functions\n> \n> Hi,\n> I need to use the max function in my query. I had very bad performance when\n> I used the max :\n> \n> SELECT Ma.User_Id,\n> COUNT(*) COUNT\n> FROM Manuim Ma\n> WHERE Ma.Bb_Open_Date =\n> (SELECT max(Bb_Open_Date)\n> FROM Manuim Man\n> WHERE Man.User_Id = Ma.User_Id\n> )\n> GROUP BY Ma.User_Id\n> HAVING COUNT(*) > 1;\n> \n> \n> QUERY PLAN\n> \n> ---------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.56..3250554784.13 rows=115111 width=18)\n> Group Key: ma.user_id\n> Filter: (count(*) > 1)\n> -> Index Scan using manuim_i_user_id on manuim ma\n> (cost=0.56..3250552295.59 rows=178324 width=10)\n> Filter: (bb_open_date = (SubPlan 1))\n> SubPlan 1\n> -> Aggregate (cost=90.98..90.99 rows=1 width=8)\n> -> Index Scan using manuim_i_user_id on manuim man\n> (cost=0.56..90.92 rows=22 width=8)\n> Index Cond: ((user_id)::text = (ma.user_id)::text)\n> (9 rows)\n> \n> \n> \n> So I used the limit 1 option :\n> \n> SELECT Ma.User_Id,\n> COUNT(*) COUNT\n> FROM Manuim Ma\n> WHERE Ma.Bb_Open_Date =\n> (SELECT Bb_Open_Date\n> FROM Manuim Man\n> WHERE Man.User_Id = Ma.User_Id order\n> by bb_open_date desc limit 1\n> )\n> GROUP BY Ma.User_Id\n> HAVING COUNT(*) > 1;\n> \n> and the performance are still the same :\n> \n> QUERY PLAN\n> \n> ---------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.56..3252248863.46 rows=115111 width=18)\n> Group Key: ma.user_id\n> Filter: (count(*) > 1)\n> -> Index Scan using manuim_i_user_id on manuim ma\n> (cost=0.56..3252246374.92 rows=178324 width=10)\n> Filter: (bb_open_date = (SubPlan 1))\n> SubPlan 1\n> -> Limit (cost=91.03..91.03 rows=1 width=8)\n> -> Sort (cost=91.03..91.09 rows=22 width=8)\n> Sort Key: man.bb_open_date DESC\n> -> Index Scan using manuim_i_user_id on manuim man\n> (cost=0.56..90.92 rows=22 width=8)\n> Index Cond: ((user_id)::text =\n> (ma.user_id)::text)\n> (11 rows)\n> \n> \n> \n> the reading on the table manuim takes a lot of effort, what else can I do ?\n> the table`s size is 8G.\n> \n> select count(*) from manuim;\n> count\n> ----------\n> 35664828\n> (1 row)\n> \n> the indexes on the table :\n> \"manuim_bb_open_date\" btree (bb_open_date)\n> \"manuim_i_user_id\" btree (user_id)\n> \n> \n> Any idea how can I continue from here ? Thanks , Mariel.\n\nStart by posting the results of \"explain analyze\" of that queries, so we can see some timming stuff.\n\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Oct 2017 13:35:47 +0000 (UTC)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select with max functions"
},
{
"msg_contents": "\n\nAm 01.10.2017 um 14:41 schrieb Mariel Cherkassky:\n> Hi,\n> I need to use the max function in my query. I had very bad performance \n> when I used the max :\n>\n> SELECT Ma.User_Id,\n> COUNT(*) COUNT\n> FROM Manuim Ma\n> WHERE Ma.Bb_Open_Date =\n> (SELECT max(Bb_Open_Date)\n> FROM Manuim Man\n> WHERE Man.User_Id = Ma.User_Id\n> )\n> GROUP BY Ma.User_Id\n> HAVING COUNT(*) > 1;\n>\n>\n> Any idea how can I continue from here ? Thanks , Mariel.\n\n\nMaybe you can rewrite it, for instance to\n\nselect distinct on (user_id, bb_open_date) user_id, bb_open_date, \ncount(1) from Manuim group by 1,2 having count(1) > 1;\n\nmaybe much cheaper, but untested! If not, please share more details, at \nleast table-definition.\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Oct 2017 20:48:45 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select with max functions"
},
{
"msg_contents": "Andreas I tried to rewrite it with the function rank() but I failed. The\nquery you wrote isnt the same as what I search. Moreover, I cant use\nexplain analyze because it is taking to much time to run and I'm getting\ntimeout..\n\n2017-10-01 21:48 GMT+03:00 Andreas Kretschmer <[email protected]>:\n\n>\n>\n> Am 01.10.2017 um 14:41 schrieb Mariel Cherkassky:\n>\n>> Hi,\n>> I need to use the max function in my query. I had very bad performance\n>> when I used the max :\n>>\n>> SELECT Ma.User_Id,\n>> COUNT(*) COUNT\n>> FROM Manuim Ma\n>> WHERE Ma.Bb_Open_Date =\n>> (SELECT max(Bb_Open_Date)\n>> FROM Manuim Man\n>> WHERE Man.User_Id = Ma.User_Id\n>> )\n>> GROUP BY Ma.User_Id\n>> HAVING COUNT(*) > 1;\n>>\n>>\n>> Any idea how can I continue from here ? Thanks , Mariel.\n>>\n>\n>\n> Maybe you can rewrite it, for instance to\n>\n> select distinct on (user_id, bb_open_date) user_id, bb_open_date, count(1)\n> from Manuim group by 1,2 having count(1) > 1;\n>\n> maybe much cheaper, but untested! If not, please share more details, at\n> least table-definition.\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAndreas I tried to rewrite it with the function rank() but I failed. The query you wrote isnt the same as what I search. Moreover, I cant use explain analyze because it is taking to much time to run and I'm getting timeout..2017-10-01 21:48 GMT+03:00 Andreas Kretschmer <[email protected]>:\n\nAm 01.10.2017 um 14:41 schrieb Mariel Cherkassky:\n\nHi,\nI need to use the max function in my query. I had very bad performance when I used the max :\n\n SELECT Ma.User_Id,\n COUNT(*) COUNT\n FROM Manuim Ma\n WHERE Ma.Bb_Open_Date =\n (SELECT max(Bb_Open_Date)\n FROM Manuim Man\n WHERE Man.User_Id = Ma.User_Id\n )\n GROUP BY Ma.User_Id\n HAVING COUNT(*) > 1;\n\n\nAny idea how can I continue from here ? Thanks , Mariel.\n\n\n\nMaybe you can rewrite it, for instance to\n\nselect distinct on (user_id, bb_open_date) user_id, bb_open_date, count(1) from Manuim group by 1,2 having count(1) > 1;\n\nmaybe much cheaper, but untested! If not, please share more details, at least table-definition.\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 2 Oct 2017 16:25:19 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select with max functions"
},
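For what it's worth, one way the rank() idea could be written so that it matches the original query is sketched below. It is untested here and assumes bb_open_date is never NULL. rank() assigns 1 to every row tied on the latest bb_open_date within a user_id, so counting those rows is equivalent to counting the rows equal to the per-user max, while scanning the table once instead of running a subplan per row:

    SELECT user_id, count(*) AS count
    FROM (SELECT user_id,
                 rank() OVER (PARTITION BY user_id ORDER BY bb_open_date DESC) AS rnk
          FROM manuim) t
    WHERE rnk = 1
    GROUP BY user_id
    HAVING count(*) > 1;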
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Mariel Cherkassky\" <[email protected]>\n> Para: \"Andreas Kretschmer\" <[email protected]>\n> CC: [email protected]\n> Enviados: Lunes, 2 de Octubre 2017 10:25:19\n> Asunto: Re: [PERFORM] select with max functions\n> \n> Andreas I tried to rewrite it with the function rank() but I failed. The\n> query you wrote isnt the same as what I search. Moreover, I cant use\n> explain analyze because it is taking to much time to run and I'm getting\n> timeout..\n> \n> 2017-10-01 21:48 GMT+03:00 Andreas Kretschmer <[email protected]>:\n\nDo a \"set statement_timeout TO 0\" prior to \"explain analyze\"\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Oct 2017 13:45:11 +0000 (UTC)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select with max functions"
},
{
"msg_contents": "explain analyze SELECT Ma.User_Id,\n COUNT(*) COUNT\n FROM Manuim Ma\n WHERE Ma.Bb_Open_Date =\n (SELECT Bb_Open_Date\n FROM Manuim Man\n WHERE Man.User_Id = Ma.User_Id order\nby bb_open_date desc limit 1\n )\n GROUP BY Ma.User_Id\n HAVING COUNT(*) > 1;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------\n----------------------------------------\n GroupAggregate (cost=0.56..2430770384.80 rows=128137 width=18) (actual\ntime=55.823..2970443.757 rows=1213 loops=1)\n Group Key: ma.user_id\n Filter: (count(*) > 1)\n Rows Removed by Filter: 3693020\n -> Index Scan using manuim_i_user_id on manuim ma\n(cost=0.56..2430767766.00 rows=178324 width=10) (actual time=0.249\n..2966355.734 rows=3695461 loops=1)\n Filter: (bb_open_date = (SubPlan 1))\n Rows Removed by Filter: 31969367\n SubPlan 1\n -> Limit (cost=68.00..68.00 rows=1 width=8) (actual\ntime=0.082..0.082 rows=0 loops=35664828)\n -> Sort (cost=68.00..68.04 rows=16 width=8) (actual\ntime=0.081..0.081 rows=0 loops=35664828)\n Sort Key: man.bb_open_date DESC\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using manuim_i_user_id on manuim man\n(cost=0.56..67.92 rows=16 width=8) (actual ti\nme=0.001..0.069 rows=85 loops=35664828)\n Index Cond: ((user_id)::text =\n(ma.user_id)::text)\n Planning time: 0.414 ms\n Execution time: 2970444.732 ms\n(16 rows)\n\n2017-10-02 16:45 GMT+03:00 Gerardo Herzig <[email protected]>:\n\n>\n>\n> ----- Mensaje original -----\n> > De: \"Mariel Cherkassky\" <[email protected]>\n> > Para: \"Andreas Kretschmer\" <[email protected]>\n> > CC: [email protected]\n> > Enviados: Lunes, 2 de Octubre 2017 10:25:19\n> > Asunto: Re: [PERFORM] select with max functions\n> >\n> > Andreas I tried to rewrite it with the function rank() but I failed. The\n> > query you wrote isnt the same as what I search. 
Moreover, I cant use\n> > explain analyze because it is taking to much time to run and I'm getting\n> > timeout..\n> >\n> > 2017-10-01 21:48 GMT+03:00 Andreas Kretschmer <[email protected]>:\n>\n> Do a \"set statement_timeout TO 0\" prior to \"explain analyze\"\n>\n\nexplain analyze SELECT Ma.User_Id, COUNT(*) COUNT FROM Manuim Ma WHERE Ma.Bb_Open_Date = (SELECT Bb_Open_Date FROM Manuim Man WHERE Man.User_Id = Ma.User_Id order by bb_open_date desc limit 1 ) GROUP BY Ma.User_Id HAVING COUNT(*) > 1; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- GroupAggregate (cost=0.56..2430770384.80 rows=128137 width=18) (actual time=55.823..2970443.757 rows=1213 loops=1) Group Key: ma.user_id Filter: (count(*) > 1) Rows Removed by Filter: 3693020 -> Index Scan using manuim_i_user_id on manuim ma (cost=0.56..2430767766.00 rows=178324 width=10) (actual time=0.249..2966355.734 rows=3695461 loops=1) Filter: (bb_open_date = (SubPlan 1)) Rows Removed by Filter: 31969367 SubPlan 1 -> Limit (cost=68.00..68.00 rows=1 width=8) (actual time=0.082..0.082 rows=0 loops=35664828) -> Sort (cost=68.00..68.04 rows=16 width=8) (actual time=0.081..0.081 rows=0 loops=35664828) Sort Key: man.bb_open_date DESC Sort Method: quicksort Memory: 25kB -> Index Scan using manuim_i_user_id on manuim man (cost=0.56..67.92 rows=16 width=8) (actual time=0.001..0.069 rows=85 loops=35664828) Index Cond: ((user_id)::text = (ma.user_id)::text) Planning time: 0.414 ms Execution time: 2970444.732 ms(16 rows)2017-10-02 16:45 GMT+03:00 Gerardo Herzig <[email protected]>:\n----- Mensaje original -----> De: \"Mariel Cherkassky\" <[email protected]>\n> Para: \"Andreas Kretschmer\" <[email protected]>> CC: [email protected]> Enviados: Lunes, 2 de Octubre 2017 10:25:19> Asunto: Re: [PERFORM] select with max functions\n>> Andreas I tried to rewrite it with the function rank() but I failed. The> query you wrote isnt the same as what I search. Moreover, I cant use> explain analyze because it is taking to much time to run and I'm getting> timeout..>> 2017-10-01 21:48 GMT+03:00 Andreas Kretschmer <[email protected]>:\n\nDo a \"set statement_timeout TO 0\" prior to \"explain analyze\"",
"msg_date": "Mon, 2 Oct 2017 17:45:36 +0300",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select with max functions"
},
{
"msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> explain analyze SELECT Ma.User_Id,\n> COUNT(*) COUNT\n> FROM Manuim Ma\n> WHERE Ma.Bb_Open_Date =\n> (SELECT Bb_Open_Date\n> FROM Manuim Man\n> WHERE Man.User_Id = Ma.User_Id order\n> by bb_open_date desc limit 1\n> )\n> GROUP BY Ma.User_Id\n> HAVING COUNT(*) > 1;\n\nThe core problem with this query is that the sub-select has to be done\nover again for each row of the outer table, since it's a correlated\nsub-select (ie, it refers to Ma.User_Id from the outer table). Replacing\na max() call with handmade logic doesn't do anything to help that.\nI'd try refactoring it so that you calculate the max Bb_Open_Date just\nonce for each user id, perhaps along the lines of\n\nSELECT Ma.User_Id,\n COUNT(*) COUNT\n FROM Manuim Ma,\n (SELECT User_Id, max(Bb_Open_Date) as max\n FROM Manuim Man\n GROUP BY User_Id) ss\n WHERE Ma.User_Id = ss.User_Id AND\n Ma.Bb_Open_Date = ss.max\n GROUP BY Ma.User_Id\n HAVING COUNT(*) > 1;\n\nThis is still not going to be instantaneous, but it might be better.\n\nIt's possible that an index on (User_Id, Bb_Open_Date) would help,\nbut I'm not sure.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 02 Oct 2017 11:29:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select with max functions"
},
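The composite index Tom mentions at the end could be created along these lines (the index name is arbitrary, and whether the planner actually uses it for the GROUP BY / max() subquery would need to be confirmed with EXPLAIN). CONCURRENTLY avoids blocking writes on the 35-million-row table while the index builds:

    CREATE INDEX CONCURRENTLY manuim_i_user_id_open_date
        ON manuim (user_id, bb_open_date DESC);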
{
"msg_contents": "On 03/10/17 04:29, Tom Lane wrote:\n> Mariel Cherkassky <[email protected]> writes:\n>> explain analyze SELECT Ma.User_Id,\n>> COUNT(*) COUNT\n>> FROM Manuim Ma\n>> WHERE Ma.Bb_Open_Date =\n>> (SELECT Bb_Open_Date\n>> FROM Manuim Man\n>> WHERE Man.User_Id = Ma.User_Id order\n>> by bb_open_date desc limit 1\n>> )\n>> GROUP BY Ma.User_Id\n>> HAVING COUNT(*) > 1;\n> The core problem with this query is that the sub-select has to be done\n> over again for each row of the outer table, since it's a correlated\n> sub-select (ie, it refers to Ma.User_Id from the outer table). Replacing\n> a max() call with handmade logic doesn't do anything to help that.\n> I'd try refactoring it so that you calculate the max Bb_Open_Date just\n> once for each user id, perhaps along the lines of\n>\n> SELECT Ma.User_Id,\n> COUNT(*) COUNT\n> FROM Manuim Ma,\n> (SELECT User_Id, max(Bb_Open_Date) as max\n> FROM Manuim Man\n> GROUP BY User_Id) ss\n> WHERE Ma.User_Id = ss.User_Id AND\n> Ma.Bb_Open_Date = ss.max\n> GROUP BY Ma.User_Id\n> HAVING COUNT(*) > 1;\n>\n> This is still not going to be instantaneous, but it might be better.\n>\n> It's possible that an index on (User_Id, Bb_Open_Date) would help,\n> but I'm not sure.\n>\n> \t\t\tregards, tom lane\n>\n>\n\nFurther ideas based on Tom's rewrite: If that MAX is still expensive it \nmight be worth breaking\n\n\nSELECT User_Id, max(Bb_Open_Date) as max\n FROM Manuim Man\n GROUP BY User_Id\n\nout into a VIEW, and considering making it MATERIALIZED, or creating an \nequivalent trigger based summary table (there are examples in the docs \nof how to do this).\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Oct 2017 10:16:37 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select with max functions"
}
] |
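A minimal sketch of the materialized-view variant Mark describes, assuming the manuim table from this thread; the view name is made up, and REFRESH ... CONCURRENTLY needs PostgreSQL 9.4 or later plus the unique index shown.

CREATE MATERIALIZED VIEW manuim_latest AS
    SELECT user_id, MAX(bb_open_date) AS max_open_date
    FROM manuim
    GROUP BY user_id;

CREATE UNIQUE INDEX ON manuim_latest (user_id);

-- Refresh on whatever schedule keeps the data fresh enough:
REFRESH MATERIALIZED VIEW CONCURRENTLY manuim_latest;

-- Tom's query can then join the precomputed maxima instead of recomputing them:
SELECT ma.user_id, COUNT(*) AS cnt
FROM manuim ma
JOIN manuim_latest ss
  ON ma.user_id = ss.user_id
 AND ma.bb_open_date = ss.max_open_date
GROUP BY ma.user_id
HAVING COUNT(*) > 1;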
[
{
"msg_contents": "Hello,\n\nI come from Oracle world and we are porting all our applications to\npostgresql.\n\nThe application calls 2 stored procs,\n- first one does a few selects and then an insert\n- second one does an update\n\nThe main table on which the insert and the update happens is truncated\nbefore every performance test.\n\nWe are doing about 100 executions of both of these stored proc per second.\n\nIn Oracle each exec takes about 1millisec whereas in postgres its taking\n10millisec and that eventually leads to a queue build up in our application.\n\nAll indices are in place. The select, insert & update are all single row\noperations and use the PK.\n\nIt does not look like any query taking longer but something else. How can I\ncheck where is the time being spent? There are no IO waits, so its all on\nthe CPU.\n\nbtw, postgres and oracle both are installed on the same server, so no\ndifferences in env.\n\nAll suggestions welcome but I am more of looking at tools or any profilers\nthat I can use to find out where is the time being spent because we believe\nmost of our applications will run into similar issues.\n\nThe version is 9.6 on RHEL 7.2.\n\nMany thanks in advance.\n\nRegards,\nPurav\n\nHello,I come from Oracle world and we are porting all our applications to postgresql.The application calls 2 stored procs, - first one does a few selects and then an insert- second one does an updateThe main table on which the insert and the update happens is truncated before every performance test.We are doing about 100 executions of both of these stored proc per second.In Oracle each exec takes about 1millisec whereas in postgres its taking 10millisec and that eventually leads to a queue build up in our application.All indices are in place. The select, insert & update are all single row operations and use the PK.It does not look like any query taking longer but something else. How can I check where is the time being spent? There are no IO waits, so its all on the CPU.btw, postgres and oracle both are installed on the same server, so no differences in env.All suggestions welcome but I am more of looking at tools or any profilers that I can use to find out where is the time being spent because we believe most of our applications will run into similar issues.The version is 9.6 on RHEL 7.2.Many thanks in advance.Regards,Purav",
"msg_date": "Tue, 3 Oct 2017 20:03:16 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stored Procedure Performance"
},
{
"msg_contents": "Purav Chovatia wrote:\n> I come from Oracle world and we are porting all our applications to postgresql.\n> \n> The application calls 2 stored procs, \n> - first one does a few selects and then an insert\n> - second one does an update\n> \n> The main table on which the insert and the update happens is truncated before every performance test.\n> \n> We are doing about 100 executions of both of these stored proc per second.\n> \n> In Oracle each exec takes about 1millisec whereas in postgres its taking 10millisec and that eventually leads to a queue build up in our application.\n> \n> All indices are in place. The select, insert & update are all single row operations and use the PK.\n> \n> It does not look like any query taking longer but something else. How can I check where is the time being spent? There are no IO waits, so its all on the CPU.\n\nYou could profile the PostgreSQL server while it is executing the\nworkload,\nsee for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n\nThat way you could see where the time is spent.\n\nPL/pgSQL is not optimized for performance like PL/SQL.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 03 Oct 2017 16:54:46 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "There is also the option of pg_stat_statements:\nhttps://www.postgresql.org/docs/current/static/pgstatstatements.html and\nauto_explain:\nhttps://www.postgresql.org/docs/current/static/auto-explain.html\n\nThese should help you identify what is slowing things down. There is no\nreason I could think of you should be seeing a 10x slowdown between\nPostgres and Oracle, so you'll likely have to just profile it to find out.\n\nThere is also the option of pg_stat_statements: https://www.postgresql.org/docs/current/static/pgstatstatements.html and auto_explain: https://www.postgresql.org/docs/current/static/auto-explain.htmlThese should help you identify what is slowing things down. There is no reason I could think of you should be seeing a 10x slowdown between Postgres and Oracle, so you'll likely have to just profile it to find out.",
"msg_date": "Tue, 3 Oct 2017 11:17:38 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
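A minimal setup sketch for the two extensions Adam points to; the threshold shown is illustrative only, and changes to shared_preload_libraries require a server restart.

-- postgresql.conf:
--   shared_preload_libraries = 'pg_stat_statements, auto_explain'
--   auto_explain.log_min_duration = '5ms'   -- log plans of statements slower than this

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Statements ranked by total time (the column is total_time in 9.6):
SELECT query, calls, total_time, total_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;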
{
"msg_contents": "2017-10-03 17:17 GMT+02:00 Adam Brusselback <[email protected]>:\n\n> There is also the option of pg_stat_statements: https://\n> www.postgresql.org/docs/current/static/pgstatstatements.html and\n> auto_explain: https://www.postgresql.org/docs/current/\n> static/auto-explain.html\n>\n> These should help you identify what is slowing things down. There is no\n> reason I could think of you should be seeing a 10x slowdown between\n> Postgres and Oracle, so you'll likely have to just profile it to find out.\n>\n\ndepends what is inside.\n\nThe max 10x slow down is possible if you are hit some unoptimized cases.\nThe times about 1ms - 10ms shows so procedure (code) can be very sensitive\nto some impacts.\n\nRegards\n\nPavel\n\n2017-10-03 17:17 GMT+02:00 Adam Brusselback <[email protected]>:There is also the option of pg_stat_statements: https://www.postgresql.org/docs/current/static/pgstatstatements.html and auto_explain: https://www.postgresql.org/docs/current/static/auto-explain.htmlThese should help you identify what is slowing things down. There is no reason I could think of you should be seeing a 10x slowdown between Postgres and Oracle, so you'll likely have to just profile it to find out.depends what is inside.The max 10x slow down is possible if you are hit some unoptimized cases. The times about 1ms - 10ms shows so procedure (code) can be very sensitive to some impacts. RegardsPavel",
"msg_date": "Tue, 3 Oct 2017 17:28:55 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Thanks Laurenz, am having a look at perf.\n\nCan you pls help understand what exactly do you mean when you say \"PL/pgSQL\nis not optimized for performance like PL/SQL\". Do you mean to indicate that\napp firing queries/DMLs directly would be a better option as compared to\nputting those in Stored Procs?\n\nRegards\n\nOn 3 October 2017 at 20:24, Laurenz Albe <[email protected]> wrote:\n\n> Purav Chovatia wrote:\n> > I come from Oracle world and we are porting all our applications to\n> postgresql.\n> >\n> > The application calls 2 stored procs,\n> > - first one does a few selects and then an insert\n> > - second one does an update\n> >\n> > The main table on which the insert and the update happens is truncated\n> before every performance test.\n> >\n> > We are doing about 100 executions of both of these stored proc per\n> second.\n> >\n> > In Oracle each exec takes about 1millisec whereas in postgres its taking\n> 10millisec and that eventually leads to a queue build up in our application.\n> >\n> > All indices are in place. The select, insert & update are all single row\n> operations and use the PK.\n> >\n> > It does not look like any query taking longer but something else. How\n> can I check where is the time being spent? There are no IO waits, so its\n> all on the CPU.\n>\n> You could profile the PostgreSQL server while it is executing the\n> workload,\n> see for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n>\n> That way you could see where the time is spent.\n>\n> PL/pgSQL is not optimized for performance like PL/SQL.\n>\n> Yours,\n> Laurenz Albe\n>\n\nThanks Laurenz, am having a look at perf.Can you pls help understand what exactly do you mean when you say \"PL/pgSQL is not optimized for performance like PL/SQL\". Do you mean to indicate that app firing queries/DMLs directly would be a better option as compared to putting those in Stored Procs?RegardsOn 3 October 2017 at 20:24, Laurenz Albe <[email protected]> wrote:Purav Chovatia wrote:\n> I come from Oracle world and we are porting all our applications to postgresql.\n>\n> The application calls 2 stored procs, \n> - first one does a few selects and then an insert\n> - second one does an update\n>\n> The main table on which the insert and the update happens is truncated before every performance test.\n>\n> We are doing about 100 executions of both of these stored proc per second.\n>\n> In Oracle each exec takes about 1millisec whereas in postgres its taking 10millisec and that eventually leads to a queue build up in our application.\n>\n> All indices are in place. The select, insert & update are all single row operations and use the PK.\n>\n> It does not look like any query taking longer but something else. How can I check where is the time being spent? There are no IO waits, so its all on the CPU.\n\nYou could profile the PostgreSQL server while it is executing the\nworkload,\nsee for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n\nThat way you could see where the time is spent.\n\nPL/pgSQL is not optimized for performance like PL/SQL.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 11 Oct 2017 19:29:19 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Thanks.\n\nWe looked at pg_stat_statements and we see execution count & total time\ntaken. But that still does not help me to identify why is it slow or what\nis taking time or where is the wait.\n\nbtw, does pg_stat_statements add considerable overhead? Coming from the\nOracle world, we are very used to such execution stats, and hence we are\nplanning to add this extension as a default to all our production\ndeployments.\n\nIts a single row select using PK, single row update using PK and a single\nrow insert, so I dont see anything wrong with the code. So auto_explain\nwould not add any value, I believe.\n\nBasically, on an Oracle server, I would minimally look at statspack/awr\nreport & OS stats (like cpu, iostat & memory) to start with. What should I\nlook for in case of a Postgres server.\n\nThanks & Regards\n\nOn 3 October 2017 at 20:58, Pavel Stehule <[email protected]> wrote:\n\n>\n>\n> 2017-10-03 17:17 GMT+02:00 Adam Brusselback <[email protected]>:\n>\n>> There is also the option of pg_stat_statements: https://ww\n>> w.postgresql.org/docs/current/static/pgstatstatements.html and\n>> auto_explain: https://www.postgresql.org/docs/current/static\n>> /auto-explain.html\n>>\n>> These should help you identify what is slowing things down. There is no\n>> reason I could think of you should be seeing a 10x slowdown between\n>> Postgres and Oracle, so you'll likely have to just profile it to find out.\n>>\n>\n> depends what is inside.\n>\n> The max 10x slow down is possible if you are hit some unoptimized cases.\n> The times about 1ms - 10ms shows so procedure (code) can be very sensitive\n> to some impacts.\n>\n> Regards\n>\n> Pavel\n>\n>\n\nThanks.We looked at pg_stat_statements and we see execution count & total time taken. But that still does not help me to identify why is it slow or what is taking time or where is the wait. btw, does pg_stat_statements add considerable overhead? Coming from the Oracle world, we are very used to such execution stats, and hence we are planning to add this extension as a default to all our production deployments.Its a single row select using PK, single row update using PK and a single row insert, so I dont see anything wrong with the code. So auto_explain would not add any value, I believe.Basically, on an Oracle server, I would minimally look at statspack/awr report & OS stats (like cpu, iostat & memory) to start with. What should I look for in case of a Postgres server.Thanks & RegardsOn 3 October 2017 at 20:58, Pavel Stehule <[email protected]> wrote:2017-10-03 17:17 GMT+02:00 Adam Brusselback <[email protected]>:There is also the option of pg_stat_statements: https://www.postgresql.org/docs/current/static/pgstatstatements.html and auto_explain: https://www.postgresql.org/docs/current/static/auto-explain.htmlThese should help you identify what is slowing things down. There is no reason I could think of you should be seeing a 10x slowdown between Postgres and Oracle, so you'll likely have to just profile it to find out.depends what is inside.The max 10x slow down is possible if you are hit some unoptimized cases. The times about 1ms - 10ms shows so procedure (code) can be very sensitive to some impacts. RegardsPavel",
"msg_date": "Wed, 11 Oct 2017 19:41:03 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "2017-10-11 15:59 GMT+02:00 Purav Chovatia <[email protected]>:\n\n> Thanks Laurenz, am having a look at perf.\n>\n> Can you pls help understand what exactly do you mean when you say \"PL/pgSQL\n> is not optimized for performance like PL/SQL\". Do you mean to indicate that\n> app firing queries/DMLs directly would be a better option as compared to\n> putting those in Stored Procs?\n>\n\nPL/pgSQL is perfect glue for SQL. SQL queries has same speed without\ndependency on environment that executed it.\n\nThis sentence mean, so PLpgSQL is not designed for intensive mathematics\ncalculation. PL/SQL is self govering environment ... it has own data\ntypes, it has own implementation of logical and mathematics operators.\nPLpgSQL is layer over SQL engine - and has not own types, has not own\noperators. Any expression is translated to SQL and then is interpreted by\nSQL expression interpret. Maybe in next few years there will be a JIT\ncompiler. But it is not now. This is current bottleneck of PLpgSQL. If your\nPL code is glue for SQL queries (implementation of some business\nprocesses), then PLpgSQL is fast enough. If you try to calculate numeric\nintegration or derivation of some functions, then PLpgSQL is slow. It is\nnot too slow - the speed is comparable with PHP, but it is significantly\nslower than C language.\n\nPostgreSQL has perfect C API - so intensive numeric calculations are\nusually implemented as C extension.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> On 3 October 2017 at 20:24, Laurenz Albe <[email protected]> wrote:\n>\n>> Purav Chovatia wrote:\n>> > I come from Oracle world and we are porting all our applications to\n>> postgresql.\n>> >\n>> > The application calls 2 stored procs,\n>> > - first one does a few selects and then an insert\n>> > - second one does an update\n>> >\n>> > The main table on which the insert and the update happens is truncated\n>> before every performance test.\n>> >\n>> > We are doing about 100 executions of both of these stored proc per\n>> second.\n>> >\n>> > In Oracle each exec takes about 1millisec whereas in postgres its\n>> taking 10millisec and that eventually leads to a queue build up in our\n>> application.\n>> >\n>> > All indices are in place. The select, insert & update are all single\n>> row operations and use the PK.\n>> >\n>> > It does not look like any query taking longer but something else. How\n>> can I check where is the time being spent? There are no IO waits, so its\n>> all on the CPU.\n>>\n>> You could profile the PostgreSQL server while it is executing the\n>> workload,\n>> see for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n>>\n>> That way you could see where the time is spent.\n>>\n>> PL/pgSQL is not optimized for performance like PL/SQL.\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>\n>\n\n2017-10-11 15:59 GMT+02:00 Purav Chovatia <[email protected]>:Thanks Laurenz, am having a look at perf.Can you pls help understand what exactly do you mean when you say \"PL/pgSQL is not optimized for performance like PL/SQL\". Do you mean to indicate that app firing queries/DMLs directly would be a better option as compared to putting those in Stored Procs?PL/pgSQL is perfect glue for SQL. SQL queries has same speed without dependency on environment that executed it.This sentence mean, so PLpgSQL is not designed for intensive mathematics calculation. PL/SQL is self govering environment ... it has own data types, it has own implementation of logical and mathematics operators. 
PLpgSQL is layer over SQL engine - and has not own types, has not own operators. Any expression is translated to SQL and then is interpreted by SQL expression interpret. Maybe in next few years there will be a JIT compiler. But it is not now. This is current bottleneck of PLpgSQL. If your PL code is glue for SQL queries (implementation of some business processes), then PLpgSQL is fast enough. If you try to calculate numeric integration or derivation of some functions, then PLpgSQL is slow. It is not too slow - the speed is comparable with PHP, but it is significantly slower than C language.PostgreSQL has perfect C API - so intensive numeric calculations are usually implemented as C extension.RegardsPavel RegardsOn 3 October 2017 at 20:24, Laurenz Albe <[email protected]> wrote:Purav Chovatia wrote:\n> I come from Oracle world and we are porting all our applications to postgresql.\n>\n> The application calls 2 stored procs, \n> - first one does a few selects and then an insert\n> - second one does an update\n>\n> The main table on which the insert and the update happens is truncated before every performance test.\n>\n> We are doing about 100 executions of both of these stored proc per second.\n>\n> In Oracle each exec takes about 1millisec whereas in postgres its taking 10millisec and that eventually leads to a queue build up in our application.\n>\n> All indices are in place. The select, insert & update are all single row operations and use the PK.\n>\n> It does not look like any query taking longer but something else. How can I check where is the time being spent? There are no IO waits, so its all on the CPU.\n\nYou could profile the PostgreSQL server while it is executing the\nworkload,\nsee for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n\nThat way you could see where the time is spent.\n\nPL/pgSQL is not optimized for performance like PL/SQL.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 11 Oct 2017 16:20:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
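A toy illustration (not from the thread) of the point Pavel makes: every expression in a PL/pgSQL loop is evaluated through the SQL expression machinery, so tight computational loops pay a per-iteration overhead that an equivalent set-based statement does not.

DO $$
DECLARE
    s bigint := 0;
BEGIN
    FOR i IN 1..1000000 LOOP
        s := s + i;   -- one SQL expression evaluation per iteration
    END LOOP;
    RAISE NOTICE 'plpgsql loop sum: %', s;
END
$$;

-- The same work done in a single set-based statement:
SELECT sum(i) FROM generate_series(1, 1000000) AS i;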
{
"msg_contents": "Thanks Pavel. Our SPs are not doing any mathematical calculations. Its\nmostly if-else, so I would expect good performance.\n\nOn 11 October 2017 at 19:50, Pavel Stehule <[email protected]> wrote:\n\n>\n>\n> 2017-10-11 15:59 GMT+02:00 Purav Chovatia <[email protected]>:\n>\n>> Thanks Laurenz, am having a look at perf.\n>>\n>> Can you pls help understand what exactly do you mean when you say \"PL/pgSQL\n>> is not optimized for performance like PL/SQL\". Do you mean to indicate that\n>> app firing queries/DMLs directly would be a better option as compared to\n>> putting those in Stored Procs?\n>>\n>\n> PL/pgSQL is perfect glue for SQL. SQL queries has same speed without\n> dependency on environment that executed it.\n>\n> This sentence mean, so PLpgSQL is not designed for intensive mathematics\n> calculation. PL/SQL is self govering environment ... it has own data\n> types, it has own implementation of logical and mathematics operators.\n> PLpgSQL is layer over SQL engine - and has not own types, has not own\n> operators. Any expression is translated to SQL and then is interpreted by\n> SQL expression interpret. Maybe in next few years there will be a JIT\n> compiler. But it is not now. This is current bottleneck of PLpgSQL. If your\n> PL code is glue for SQL queries (implementation of some business\n> processes), then PLpgSQL is fast enough. If you try to calculate numeric\n> integration or derivation of some functions, then PLpgSQL is slow. It is\n> not too slow - the speed is comparable with PHP, but it is significantly\n> slower than C language.\n>\n> PostgreSQL has perfect C API - so intensive numeric calculations are\n> usually implemented as C extension.\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Regards\n>>\n>> On 3 October 2017 at 20:24, Laurenz Albe <[email protected]>\n>> wrote:\n>>\n>>> Purav Chovatia wrote:\n>>> > I come from Oracle world and we are porting all our applications to\n>>> postgresql.\n>>> >\n>>> > The application calls 2 stored procs,\n>>> > - first one does a few selects and then an insert\n>>> > - second one does an update\n>>> >\n>>> > The main table on which the insert and the update happens is truncated\n>>> before every performance test.\n>>> >\n>>> > We are doing about 100 executions of both of these stored proc per\n>>> second.\n>>> >\n>>> > In Oracle each exec takes about 1millisec whereas in postgres its\n>>> taking 10millisec and that eventually leads to a queue build up in our\n>>> application.\n>>> >\n>>> > All indices are in place. The select, insert & update are all single\n>>> row operations and use the PK.\n>>> >\n>>> > It does not look like any query taking longer but something else. How\n>>> can I check where is the time being spent? There are no IO waits, so its\n>>> all on the CPU.\n>>>\n>>> You could profile the PostgreSQL server while it is executing the\n>>> workload,\n>>> see for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n>>>\n>>> That way you could see where the time is spent.\n>>>\n>>> PL/pgSQL is not optimized for performance like PL/SQL.\n>>>\n>>> Yours,\n>>> Laurenz Albe\n>>>\n>>\n>>\n>\n\nThanks Pavel. Our SPs are not doing any mathematical calculations. Its mostly if-else, so I would expect good performance.On 11 October 2017 at 19:50, Pavel Stehule <[email protected]> wrote:2017-10-11 15:59 GMT+02:00 Purav Chovatia <[email protected]>:Thanks Laurenz, am having a look at perf.Can you pls help understand what exactly do you mean when you say \"PL/pgSQL is not optimized for performance like PL/SQL\". 
Do you mean to indicate that app firing queries/DMLs directly would be a better option as compared to putting those in Stored Procs?PL/pgSQL is perfect glue for SQL. SQL queries has same speed without dependency on environment that executed it.This sentence mean, so PLpgSQL is not designed for intensive mathematics calculation. PL/SQL is self govering environment ... it has own data types, it has own implementation of logical and mathematics operators. PLpgSQL is layer over SQL engine - and has not own types, has not own operators. Any expression is translated to SQL and then is interpreted by SQL expression interpret. Maybe in next few years there will be a JIT compiler. But it is not now. This is current bottleneck of PLpgSQL. If your PL code is glue for SQL queries (implementation of some business processes), then PLpgSQL is fast enough. If you try to calculate numeric integration or derivation of some functions, then PLpgSQL is slow. It is not too slow - the speed is comparable with PHP, but it is significantly slower than C language.PostgreSQL has perfect C API - so intensive numeric calculations are usually implemented as C extension.RegardsPavel RegardsOn 3 October 2017 at 20:24, Laurenz Albe <[email protected]> wrote:Purav Chovatia wrote:\n> I come from Oracle world and we are porting all our applications to postgresql.\n>\n> The application calls 2 stored procs, \n> - first one does a few selects and then an insert\n> - second one does an update\n>\n> The main table on which the insert and the update happens is truncated before every performance test.\n>\n> We are doing about 100 executions of both of these stored proc per second.\n>\n> In Oracle each exec takes about 1millisec whereas in postgres its taking 10millisec and that eventually leads to a queue build up in our application.\n>\n> All indices are in place. The select, insert & update are all single row operations and use the PK.\n>\n> It does not look like any query taking longer but something else. How can I check where is the time being spent? There are no IO waits, so its all on the CPU.\n\nYou could profile the PostgreSQL server while it is executing the\nworkload,\nsee for example https://wiki.postgresql.org/wiki/Profiling_with_perf\n\nThat way you could see where the time is spent.\n\nPL/pgSQL is not optimized for performance like PL/SQL.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 11 Oct 2017 21:35:31 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Is there any error handling in there? I remember seeing performance\nissues if you put in any code to catch exceptions.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 12:37:01 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Yes, there is some code to catch exceptions like unique constraint\nviolation and no data found. Do you suggest we trying by commenting that\npart? btw, the dataset is a controlled one, so what I can confirm is we are\nnot hitting any exceptions.\n\nThanks\n\nOn 11 October 2017 at 22:07, Adam Brusselback <[email protected]>\nwrote:\n\n> Is there any error handling in there? I remember seeing performance\n> issues if you put in any code to catch exceptions.\n>\n\nYes, there is some code to catch exceptions like unique constraint violation and no data found. Do you suggest we trying by commenting that part? btw, the dataset is a controlled one, so what I can confirm is we are not hitting any exceptions.ThanksOn 11 October 2017 at 22:07, Adam Brusselback <[email protected]> wrote:Is there any error handling in there? I remember seeing performance\nissues if you put in any code to catch exceptions.",
"msg_date": "Wed, 11 Oct 2017 22:22:23 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "> Yes, there is some code to catch exceptions like unique constraint violation and no data found. Do you suggest we trying by commenting that part?\n\nThat is likely it. Comment that out and test.\nIf you still need to handle a unique violation, see if you can instead\nuse the ON CONFLICT clause on the INSERT.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 12:54:37 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
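A hypothetical sketch of the rewrite Adam suggests (table, function and parameter names are invented): the exception handler, which sets up a subtransaction on every call, is replaced by INSERT ... ON CONFLICT, available from 9.5 on.

CREATE TABLE IF NOT EXISTS accounts (id int PRIMARY KEY, balance numeric NOT NULL);

CREATE OR REPLACE FUNCTION upsert_account(p_id int, p_balance numeric)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    -- Instead of INSERT ... EXCEPTION WHEN unique_violation THEN UPDATE:
    INSERT INTO accounts (id, balance)
    VALUES (p_id, p_balance)
    ON CONFLICT (id) DO UPDATE
        SET balance = EXCLUDED.balance;
END
$$;

SELECT upsert_account(1, 100.0);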
{
"msg_contents": "2017-10-11 18:52 GMT+02:00 Purav Chovatia <[email protected]>:\n\n> Yes, there is some code to catch exceptions like unique constraint\n> violation and no data found. Do you suggest we trying by commenting that\n> part? btw, the dataset is a controlled one, so what I can confirm is we are\n> not hitting any exceptions.\n>\n\nIf it is possible, don't do it in cycle, or use exception handling only\nwhen it is necessary, not from pleasure.\n\nRegards\n\nPavel\n\n\n> Thanks\n>\n> On 11 October 2017 at 22:07, Adam Brusselback <[email protected]>\n> wrote:\n>\n>> Is there any error handling in there? I remember seeing performance\n>> issues if you put in any code to catch exceptions.\n>>\n>\n>\n\n2017-10-11 18:52 GMT+02:00 Purav Chovatia <[email protected]>:Yes, there is some code to catch exceptions like unique constraint violation and no data found. Do you suggest we trying by commenting that part? btw, the dataset is a controlled one, so what I can confirm is we are not hitting any exceptions.If it is possible, don't do it in cycle, or use exception handling only when it is necessary, not from pleasure.RegardsPavelThanksOn 11 October 2017 at 22:07, Adam Brusselback <[email protected]> wrote:Is there any error handling in there? I remember seeing performance\nissues if you put in any code to catch exceptions.",
"msg_date": "Wed, 11 Oct 2017 21:06:42 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
},
{
"msg_contents": "Le 11/10/2017 à 16:11, Purav Chovatia a écrit :\n> Thanks.\n>\n> We looked at pg_stat_statements and we see execution count & total \n> time taken. But that still does not help me to identify why is it slow \n> or what is taking time or where is the wait.\n>\n> btw, does pg_stat_statements add considerable overhead? Coming from \n> the Oracle world, we are very used to such execution stats, and hence \n> we are planning to add this extension as a default to all our \n> production deployments.\n>\n> Its a single row select using PK, single row update using PK and a \n> single row insert, so I dont see anything wrong with the code. So \n> auto_explain would not add any value, I believe.\n>\n> Basically, on an Oracle server, I would minimally look at \n> statspack/awr report & OS stats (like cpu, iostat & memory) to start \n> with. What should I look for in case of a Postgres server.\nYou could have a look at the PoWA extension \n(http://dalibo.github.io/powa/). It has the same purpose as AWR.\n\n>\n> Thanks & Regards\n>\n> On 3 October 2017 at 20:58, Pavel Stehule <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n>\n> 2017-10-03 17:17 GMT+02:00 Adam Brusselback\n> <[email protected] <mailto:[email protected]>>:\n>\n> There is also the option of pg_stat_statements:\n> https://www.postgresql.org/docs/current/static/pgstatstatements.html\n> <https://www.postgresql.org/docs/current/static/pgstatstatements.html>\n> and auto_explain:\n> https://www.postgresql.org/docs/current/static/auto-explain.html\n> <https://www.postgresql.org/docs/current/static/auto-explain.html>\n>\n> These should help you identify what is slowing things down. \n> There is no reason I could think of you should be seeing a 10x\n> slowdown between Postgres and Oracle, so you'll likely have to\n> just profile it to find out.\n>\n>\n> depends what is inside.\n>\n> The max 10x slow down is possible if you are hit some unoptimized\n> cases. The times about 1ms - 10ms shows so procedure (code) can be\n> very sensitive to some impacts.\n>\n> Regards\n>\n> Pavel\n>\n>\n\n\n\n\n\n\n\n\nLe 11/10/2017 à 16:11, Purav Chovatia a\n écrit :\n\n\nThanks.\n \n\nWe looked at pg_stat_statements and we see execution count\n & total time taken. But that still does not help me to\n identify why is it slow or what is taking time or where is the\n wait. \n\n\nbtw, does pg_stat_statements add considerable overhead?\n Coming from the Oracle world, we are very used to such\n execution stats, and hence we are planning to add this\n extension as a default to all our production deployments.\n\n\nIts a single row select using PK, single row update using\n PK and a single row insert, so I dont see anything wrong with\n the code. So auto_explain would not add any value, I believe.\n\n\nBasically, on an Oracle server, I would minimally look at\n statspack/awr report & OS stats (like cpu, iostat &\n memory) to start with. 
What should I look for in case of a\n Postgres server.\n\n\n You could have a look at the PoWA extension (http://dalibo.github.io/powa/).\n It has the same purpose as AWR.\n\n\n\n\n\nThanks & Regards\n\nOn 3 October 2017 at 20:58, Pavel\n Stehule <[email protected]>\n wrote:\n\n\n\n2017-10-03 17:17 GMT+02:00\n Adam Brusselback <[email protected]>:\n\n\nThere is also the\n option of pg_stat_statements: https://www.postgresql.org/docs/current/static/pgstatstatements.html\n and auto_explain: https://www.postgresql.org/docs/current/static/auto-explain.html\n\n\nThese should help you\n identify what is slowing things down. There\n is no reason I could think of you should be\n seeing a 10x slowdown between Postgres and\n Oracle, so you'll likely have to just profile\n it to find out.\n\n\n\n\ndepends what is inside.\n\n\nThe max 10x slow down is possible if you are\n hit some unoptimized cases. The times about 1ms -\n 10ms shows so procedure (code) can be very\n sensitive to some impacts. \n\n\n\nRegards\n\n\n\nPavel",
"msg_date": "Sat, 14 Oct 2017 12:21:55 +0200",
"msg_from": "phb07 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure Performance"
}
] |
[
{
"msg_contents": "Hello,\n\nThis is my first question on this list.\n\nHow does max_parallel_workers_per_gather change Linux server load averages?\n\nI have 2 cores and my max_parallel_workers_per_gather = 2 and max_worker_processes = 8, but my load averages are between 8 and 5 with scheduled at 1/189 to 5/195. Are these so high because I increased max_parallel_workers_per_gather? My understanding is that if my load averages are greater than my number of cores the system is overloaded. Should I think about it differently once I increase max_parallel_workers_per_gather? How should I think about it?\n\nI am using postgres 9.6.\n\n\n\n\n\n\n\n\n\n\n\n\nHello, \n \nThis is my first question on this list. \n \nHow does max_parallel_workers_per_gather change Linux server load averages?\n \nI have 2 cores and my max_parallel_workers_per_gather = 2 and max_worker_processes = 8, but my load averages are between 8 and 5 with scheduled at 1/189 to 5/195. Are these so high because I increased max_parallel_workers_per_gather? My\n understanding is that if my load averages are greater than my number of cores the system is overloaded. Should I think about it differently once I increase max_parallel_workers_per_gather? How should I think about it?\n \nI am using postgres 9.6.",
"msg_date": "Tue, 3 Oct 2017 19:48:39 +0000",
"msg_from": "Ben Nachtrieb <[email protected]>",
"msg_from_op": true,
"msg_subject": "How does max_parallel_workers_per_gather change load averages?"
},
{
"msg_contents": "On 4 October 2017 at 08:48, Ben Nachtrieb <[email protected]> wrote:\n> I have 2 cores and my max_parallel_workers_per_gather = 2 and\n> max_worker_processes = 8, but my load averages are between 8 and 5 with\n> scheduled at 1/189 to 5/195. Are these so high because I increased\n> max_parallel_workers_per_gather? My understanding is that if my load\n> averages are greater than my number of cores the system is overloaded.\n> Should I think about it differently once I increase\n> max_parallel_workers_per_gather? How should I think about it?\n\nParallel query is not 100% efficient. For example, adding twice the\nCPU, in theory, will never double the performance, there's always some\noverhead to this. It's really only useful to do on systems with spare\nCPU cycles to perform this extra work. You don't seem to have much to\nspare, so you may get along better if you disable parallel query.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Oct 2017 10:44:16 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does max_parallel_workers_per_gather change load averages?"
}
] |
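A sketch of David's suggestion: setting the GUC to 0 disables parallel query for the current session, and the ALTER ROLE form (role name invented) makes it persistent for one application user without touching postgresql.conf.

SET max_parallel_workers_per_gather = 0;

ALTER ROLE app_user SET max_parallel_workers_per_gather = 0;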
[
{
"msg_contents": "I've got a query that's regressed from 9.4 to 9.6. I suspect it has \nsomething to do with the work done around bad plans from single-row \nestimates. There's a SeqScan happening even though the join is to the PK \nof bd_ident. Full plans are at [1,2,3], but here's the relevant bits...\n\n9.4:\n> -> Nested Loop Left Join (cost=1.00..50816.55 rows=1 width=27) (actual time=979.406..3213.286 rows=508 loops=1)\n> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..50807.51 rows=1 width=36) (actual time=979.381..3207.777 rows=508 loops=1)\n> Index Cond: ((filed_departuretime >= '2017-07-20 05:00:00'::timestamp without time zone) AND (filed_departuretime <= '2017-07-30 04:59:59'::timestamp without time zone))\n...\n> -> Index Scan using bd_ident_pkey on bd_ident i (cost=0.43..4.45 rows=1 width=11) (actual time=0.006..0.006 rows=1 loops=508)\n> Index Cond: (bdata_forks.ident_id = id)\n> SubPlan 1\n> -> Index Scan using bd_airport_pkey on bd_airport (cost=0.56..4.58 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=508)\n> Index Cond: (id = bdata_forks.destination_id)\n\n9.6:\n> -> Nested Loop Left Join (cost=0.57..14994960.40 rows=1 width=71) (actual time=931.479..327972.891 rows=508 loops=1)\n> Join Filter: (bdata_forks.ident_id = i.id)\n> Rows Removed by Join Filter: 1713127892\n> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..14894236.06 rows=1 width=36) (actual time=892.664..3025.653 rows=508 loops=1)\n...\n> -> Seq Scan on bd_ident i (cost=0.00..58566.00 rows=3372300 width=11) (actual time=0.002..280.966 rows=3372300 loops=508)\n ^^^^^^^^\n> SubPlan 1\n> -> Index Scan using bd_airport_pkey on bd_airport (cost=0.56..4.58 rows=1 width=20) (actual time=0.009..0.009 rows=1 loops=508)\n> Index Cond: (id = bdata_forks.destination_id)\n\nAltering the predicates somewhat (removing one of the timestamp \nconditions) results in the input to the outer part of the nested loop \nestimating at 326 rows instead of 1, which generates a good plan:\n\n> -> Nested Loop Left Join (cost=1.00..14535906.91 rows=326 width=71) (actual time=23.670..4558.273 rows=3543 loops=1)\n> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..14532973.05 rows=326 width=36) (actual time=23.647..4522.428 rows=3543 loops=1)\n \n ^^^^^^^^\n...\n> -> Index Scan using bd_ident_pkey on bd_ident i (cost=0.43..4.40 rows=1 width=11) (actual time=0.005..0.006 rows=1 loops=3543)\n> Index Cond: (bdata_forks.ident_id = id)\n> SubPlan 1\n> -> Index Scan using bd_airport_pkey on bd_airport (cost=0.56..4.58 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=3543)\n> Index Cond: (id = bdata_forks.destination_id)\n\n1: https://explain.depesz.com/s/2A90\n2: https://explain.depesz.com/s/jKdr\n3: https://explain.depesz.com/s/nFh\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Oct 2017 14:25:54 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression from 9.4-9.6"
},
{
"msg_contents": "Jim Nasby <[email protected]> writes:\n> I've got a query that's regressed from 9.4 to 9.6. I suspect it has \n> something to do with the work done around bad plans from single-row \n> estimates.\n\nWhy has this indexscan's cost estimate changed so much?\n\n>> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..50807.51 rows=1 width=36) (actual time=979.381..3207.777 rows=508 loops=1)\n\n>> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..14894236.06 rows=1 width=36) (actual time=892.664..3025.653 rows=508 loops=1)\n\nI think the reason it's discarding the preferable plan is that, with this\nhuge increment in the estimated cost getting added to both alternatives,\nthe two nestloop plans have fuzzily the same total cost, and it's picking\nthe one you don't want on the basis of some secondary criterion.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 08 Oct 2017 15:34:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression from 9.4-9.6"
},
{
"msg_contents": "On 10/8/17 2:34 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> I've got a query that's regressed from 9.4 to 9.6. I suspect it has\n>> something to do with the work done around bad plans from single-row\n>> estimates.\n> \n> Why has this indexscan's cost estimate changed so much?\n> \n>>> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..50807.51 rows=1 width=36) (actual time=979.381..3207.777 rows=508 loops=1)\n> \n>>> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..14894236.06 rows=1 width=36) (actual time=892.664..3025.653 rows=508 loops=1)\n> \n> I think the reason it's discarding the preferable plan is that, with this\n> huge increment in the estimated cost getting added to both alternatives,\n> the two nestloop plans have fuzzily the same total cost, and it's picking\n> the one you don't want on the basis of some secondary criterion.\n\nGreat question... the only thing that sticks out is the coalesce(). Let \nme see if an analyze with a higher stats target changes anything. FWIW, \nthe 9.6 database is copied from the 9.4 one once a week and then \npg_upgraded. I'm pretty sure an ANALYZE is part of that process.\n\n9.4:\n> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..50807.51 rows=1 width=36) (actual time=979.381..3207.777 rows=508 loops=1)\n> Index Cond: ((filed_departuretime >= '2017-07-20 05:00:00'::timestamp without time zone) AND (filed_departuretime <= '2017-07-30 04:59:59'::timestamp without time zone))\n> Filter: (((view_www IS NULL) OR (view_www IS TRUE)) AND (sch_block_out IS NOT NULL) AND (diverted IS NOT TRUE) AND (true_cancel IS NOT TRUE) AND (sch_block_out >= '2017-07-23 05:00:00'::timestamp without time zone) AND (sch_block_out <= '2017-07-24 04:59:59'::timestamp without time zone) AND (COALESCE(actualarrivaltime, cancellation) >= actualdeparturetime) AND ((act_block_out - sch_block_out) >= '00:15:00'::interval) AND (((SubPlan 2))::text = 'KORD'::text))\n> Rows Removed by Filter: 2696593\n> SubPlan 2\n> -> Index Scan using bd_airport_pkey on bd_airport bd_airport_1 (cost=0.56..4.58 rows=1 width=20) (actual time=0.003..0.003 rows=1 loops=21652)\n> Index Cond: (id = bdata_forks.origin_id)\n\n9.6:\n> -> Index Scan using bdata_filed_departuretime on bdata_forks (cost=0.57..14894236.06 rows=1 width=36) (actual time=892.664..3025.653 rows=508 loops=1)\n> Index Cond: ((filed_departuretime >= '2017-07-20 05:00:00'::timestamp without time zone) AND (filed_departuretime <= '2017-07-30 04:59:59'::timestamp without time zone))\n> Filter: (((view_www IS NULL) OR (view_www IS TRUE)) AND (sch_block_out IS NOT NULL) AND (diverted IS NOT TRUE) AND (true_cancel IS NOT TRUE) AND (sch_block_out >= '2017-07-23 05:00:00'::timestamp without time zone) AND (sch_block_out <= '2017-07-24 04:59:59'::timestamp without time zone) AND (COALESCE(actualarrivaltime, cancellation) >= actualdeparturetime) AND ((act_block_out - sch_block_out) >= '00:15:00'::interval) AND (((SubPlan 2))::text = 'KORD'::text))\n> Rows Removed by Filter: 2696592\n> SubPlan 2\n> -> Index Scan using bd_airport_pkey on bd_airport bd_airport_1 (cost=0.56..4.58 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=21652)\n> Index Cond: (id = bdata_forks.origin_id)\n\n\n-- \nJim C. 
Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Oct 2017 15:02:42 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression from 9.4-9.6"
},
{
"msg_contents": "On 10/8/17 3:02 PM, Jim Nasby wrote:\n>>\n>>>> -> Index Scan using bdata_filed_departuretime on bdata_forks \n>>>> (cost=0.57..50807.51 rows=1 width=36) (actual time=979.381..3207.777 \n>>>> rows=508 loops=1)\n>>\n>>>> -> Index Scan using bdata_filed_departuretime on bdata_forks \n>>>> (cost=0.57..14894236.06 rows=1 width=36) (actual \n>>>> time=892.664..3025.653 rows=508 loops=1)\n>>\n>> I think the reason it's discarding the preferable plan is that, with this\n>> huge increment in the estimated cost getting added to both alternatives,\n>> the two nestloop plans have fuzzily the same total cost, and it's picking\n>> the one you don't want on the basis of some secondary criterion.\n> \n> Great question... the only thing that sticks out is the coalesce(). Let \n> me see if an analyze with a higher stats target changes anything. FWIW, \n> the 9.6 database is copied from the 9.4 one once a week and then \n> pg_upgraded. I'm pretty sure an ANALYZE is part of that process.\n\nTurns out that analyze is the 'problem'. On the 9.4 database, pg_stats \nshows that the newest date in filed_departuretime is 3/18/2017, while \nthe 9.6 database is up-to-date. If I change the query to use 2/9/2018 \ninstead of 7/20/2017 I get the same results.\n\nSo, the larger cost estimate is theoretically more correct. If I set \nrandom_page_cost = 1 I end up with a good plan.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Oct 2017 15:33:07 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression from 9.4-9.6"
},
{
"msg_contents": "Jim Nasby <[email protected]> writes:\n> On 10/8/17 2:34 PM, Tom Lane wrote:\n>> Why has this indexscan's cost estimate changed so much?\n\n> Great question... the only thing that sticks out is the coalesce(). Let \n> me see if an analyze with a higher stats target changes anything. FWIW, \n> the 9.6 database is copied from the 9.4 one once a week and then \n> pg_upgraded. I'm pretty sure an ANALYZE is part of that process.\n\nHm, now that I see the SubPlan in there, I wonder whether 9.6 is\naccounting more conservatively for the cost of the subplan. It\nprobably is assuming that the subplan gets run for each row fetched\nfrom the index, although the loops and rows-removed counts show\nthat the previous filter conditions reject 99% of the fetched rows.\n\nBut that code looks the same in 9.4, so I don't understand why\nthe 9.4 estimate isn't equally large ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 08 Oct 2017 16:37:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression from 9.4-9.6"
},
{
"msg_contents": "On 10/8/17 3:37 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> On 10/8/17 2:34 PM, Tom Lane wrote:\n>>> Why has this indexscan's cost estimate changed so much?\n> \n>> Great question... the only thing that sticks out is the coalesce(). Let\n>> me see if an analyze with a higher stats target changes anything. FWIW,\n>> the 9.6 database is copied from the 9.4 one once a week and then\n>> pg_upgraded. I'm pretty sure an ANALYZE is part of that process.\n> \n> Hm, now that I see the SubPlan in there, I wonder whether 9.6 is\n> accounting more conservatively for the cost of the subplan. It\n> probably is assuming that the subplan gets run for each row fetched\n> from the index, although the loops and rows-removed counts show\n> that the previous filter conditions reject 99% of the fetched rows.\n> \n> But that code looks the same in 9.4, so I don't understand why\n> the 9.4 estimate isn't equally large ...\n\nBesides the analyze issue, the other part of this is\n\[email protected]/20106> select \npg_size_pretty(pg_relation_size('bdata_forks'));\n pg_size_pretty\n----------------\n 106 GB\n(1 row)\n\[email protected]/20106> select relpages::bigint*8192/reltuples from \npg_class where relname='bdata_forks';\n ?column?\n------------------\n 185.559397863791\n(1 row)\n\nWith an effective_cache_size of 200GB that's not really helping things. \nBut it's also another example of the planner's reluctance towards index \nscans.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Oct 2017 16:07:04 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression from 9.4-9.6"
}
] |
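A sketch of the two cost knobs discussed above and of Jim's bytes-per-row check, generalised to any table; the values are the ones the thread mentions, not recommendations.

SET random_page_cost = 1.0;            -- what Jim tried; plausible when the working set is cached
SET effective_cache_size = '200GB';    -- from the thread
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query to see which plan wins

-- Average on-disk bytes per row for a table (Jim's calculation):
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS table_size,
       relpages::bigint * 8192 / nullif(reltuples, 0) AS bytes_per_row
FROM pg_class
WHERE relname = 'bdata_forks';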
[
{
"msg_contents": "Hi,\n\nduring implementation of a runtime environment and the adjoining database\nabstraction layer I noticed (like many before me [0] and as correctly mentioned\nin the documentation) the sizeable performance impact of declaring a cursor\n\"with hold\" for queries with large result sets.\n\nOur use case very often looks like this:\n\nopen cursor for select from table1\nloop\n{ fetch some entries from cursor\n update table2\n commit\n}\n\nDuring iteration of the result set we commit changes to the database so we\nmust make sure to keep the cursor alive. One option is to use \"with hold\".\nUnfortunately the resultset is then instantly materialzed which is a huge\nperformance burden.\nIn our use case the \"commit\" of changes often does not affect the iteration set.\nAlso the loop might be aborted before the resultset was fully read so we never\nneeded the whole materialzed set anyway.\nTo workaround these problems, we already employ some static analysis to avoid\n\"with hold\" in all situations where there are no commits during the lifetime of\ncursor or portal. For other cursors we choose to use a different database\nconnection inside the same application to protect the cursors from commit\noperations and avoiding costly copy operations (if they would be used \"with\nhold\" on the main database connection).\nIn an attempt to further minimize the performance impact I am thinking about\nemploying a lazy \"with hold\" where I would fetch all the remaining result rows\nfrom a cursor or portal before a commit statement. This way I could at least\nhave great performance in all conflict-free situations until one would arrive at\nan impass. Naturally I am now wondering why the postgres cursor/portal is not\nalso employing the same trick (at least as an option): Postpone materialization\nof \"with hold\" cursors until it is required (like a commit operation is\ndispatched).\nProbably I am also missing many (internal) aspects but at that point it might be\npossible to optimize further. When, for instance, no changes were made to result\nset of the \"with hold\" cursor, it must not be materialized. From a previous\ndiscussions [1] I heard that one can in fact accomplish that by using a different\ndatabase connection which is one workaround we are using.\nI am not sure whether this kind of workaround/optimization work should be done\nin the database abstraction/interface layer or the database itself. Since a lot\nof people seem to run into the peformance issue many might profit from some\noptimization magic in the database for such use cases. We are very invested in\nthis performance issue and are happy to resolve it on either level.\n\nRegards,\nLeon\n\n[0] https://trac.osgeo.org/qgis/ticket/1175\n https://stackoverflow.com/questions/33635405/postgres-cursor-with-hold\n https://www.citusdata.com/blog/2016/03/30/five-ways-to-paginate/\n[1] https://bytes.com/topic/postgresql/answers/420717-cursors-transactions-why\n http://www.postgresql-archive.org/setFetchSize-td4935215.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Oct 2017 14:20:39 +0200",
"msg_from": "Leon Winter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursor With_Hold Performance Workarounds/Optimization"
}
] |
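For comparison, a sketch of a commit-safe alternative that avoids both WITH HOLD and the second connection: keyset pagination over the primary key, in the spirit of the pagination article Leon cites in [0]. Table, column and placeholder names are invented.

-- The client remembers the last id processed and repeats:
BEGIN;
SELECT id, payload
FROM table1
WHERE id > :last_seen_id      -- client-supplied bound (psql-style placeholder)
ORDER BY id
LIMIT 1000;
-- ... apply the corresponding updates to table2 ...
COMMIT;                       -- nothing to invalidate: no cursor is held open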
[
{
"msg_contents": "I stumbled upon a severe row count underestimation that confusingly\nwent away when two inner joins in the from clause were reordered. I\nwhittled it down to a reproducible test case.\n\nSchema:\n\nCREATE TABLE small (id serial primary key, ref_id int not null, subset\nint not null);\nCREATE TABLE big (id serial primary key, small_id int not null);\n\nINSERT INTO small (ref_id, subset) SELECT i/2+1, i/2+1 FROM\ngenerate_series(1,1000) i;\nINSERT INTO big (small_id) SELECT (i % 1000) + 1 FROM\ngenerate_series(1,1000000) i;\n\nCREATE INDEX ON small (ref_id);\nCREATE INDEX ON big (small_id);\n\nANALYZE;\n\nAnd the queries, differing in only the order of joins:\n\nSELECT * FROM\n small\n INNER JOIN big ON small.id = big.small_id\n INNER JOIN (SELECT 1 UNION ALL SELECT 2) lookup(ref) ON\nlookup.ref = small.ref_id\nWHERE small.subset = 42;\n\nSELECT * FROM\n small\n INNER JOIN (SELECT 1 UNION ALL SELECT 2) lookup(ref) ON\nlookup.ref = small.ref_id\n INNER JOIN big ON small.id = big.small_id\nWHERE small.subset = 42;\n\nResulting plan for the first case:\n Nested Loop (cost=20.45..2272.13 rows=8 width=24)\n -> Nested Loop (cost=0.28..16.69 rows=1 width=16)\n -> Append (cost=0.00..0.04 rows=2 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Index Scan using small_ref_id_idx on small\n(cost=0.28..8.32 rows=1 width=12)\n Index Cond: (ref_id = (1))\n Filter: (subset = 42)\n -> Bitmap Heap Scan on big (cost=20.18..2245.44 rows=1000 width=8)\n Recheck Cond: (small_id = small.id)\n -> Bitmap Index Scan on big_small_id_idx (cost=0.00..19.93\nrows=1000 width=0)\n Index Cond: (small_id = small.id)\n\nSecond case plan is identical except row count of the topmost nest loop:\n Nested Loop (cost=20.45..2272.13 rows=1000 width=24)\n\nThe union subselect was in reality somewhat more complicated, but for\nthe row count issue the simplification does not seem to matter. The\nbehavior is seen both on 9.4 and on master.\n\nDoes anybody have any idea what is going on here? In the real world\ncase this is based on the estimation was 5 rows instead of 200k, which\nresulted in quite bad plan choices downstream.\n\nRegards,\nAnts Aasma\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 12:33:31 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rowcount estimation changes based on from clause order"
},
{
"msg_contents": "Ants Aasma <[email protected]> writes:\n> I stumbled upon a severe row count underestimation that confusingly\n> went away when two inner joins in the from clause were reordered.\n\nHm, looks more like an overestimate in this example, but anyway ...\n\n> Does anybody have any idea what is going on here?\n\nset_joinrel_size_estimates says\n\n * Since there is more than one way to make a joinrel for more than two\n * base relations, the results we get here could depend on which component\n * rel pair is provided. In theory we should get the same answers no matter\n * which pair is provided; in practice, since the selectivity estimation\n * routines don't handle all cases equally well, we might not. But there's\n * not much to be done about it.\n\nIn this example I think the core of the issue is actually not so much\nbad selectivity estimates as rowcount roundoff error.\n\nIf we first consider joining \"small\" with \"big\", we get an estimate of\n2000 rows (which is dead on for what would happen if we just joined\nthose). Then we estimate the final result size as the join of that to\n\"lookup\". The selectivity number for that step is somewhat hogwash but\nhappens to yield a result that's not awful (8 rows).\n\nIn the other case we first estimate the size of the join of \"small\" with\nthe \"lookup\" subquery, and we get a rounded-off estimate of one row,\nwhereas without the roundoff it would have been probably about 0.01.\nWhen that's joined to \"big\", we are computing one row times 1 million rows\ntimes a selectivity estimate that's about right for the \"small.id =\nbig.small_id\" clause; but because the roundoff already inflated the first\njoin's size so much, you end up with an inflated final result.\n\nThis suggests that there might be some value in considering the\nsub-relations from largest to smallest, so that roundoff error\nin the earlier estimates is less likely to contaminate the final\nanswer. Not sure how expensive it would be to do that or what\nsort of instability it might introduce into plan choices.\n\nWhether that's got anything directly to do with your original problem is\nhard to say. Joins to subqueries, which we normally lack any stats for,\ntend to produce pretty bogus selectivity numbers in themselves; so the\noriginal problem might've been more of that nature.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Oct 2017 16:50:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rowcount estimation changes based on from clause order"
},
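A quick way to watch the roundoff Tom describes is to ask the planner for the two intermediate joins on their own, using the schema from the first message. This is only a sketch and the exact numbers depend on the gathered statistics, but the join of small to the lookup subquery should show the clamped rows=1 estimate, while the join of small to big should show roughly 2000 rows.

-- Join of "small" to the lookup subquery only: the row estimate is
-- clamped up to 1 even though the true selectivity is far lower.
EXPLAIN
SELECT *
FROM small
INNER JOIN (SELECT 1 UNION ALL SELECT 2) lookup(ref) ON lookup.ref = small.ref_id
WHERE small.subset = 42;

-- Join of "small" to "big" only: expect an estimate of about 2000 rows.
EXPLAIN
SELECT *
FROM small
INNER JOIN big ON small.id = big.small_id
WHERE small.subset = 42;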
{
"msg_contents": "On Thu, Oct 12, 2017 at 11:50 PM, Tom Lane <[email protected]> wrote:\n> Ants Aasma <[email protected]> writes:\n>> I stumbled upon a severe row count underestimation that confusingly\n>> went away when two inner joins in the from clause were reordered.\n>\n> Hm, looks more like an overestimate in this example, but anyway ...\n>\n>> Does anybody have any idea what is going on here?\n>\n> set_joinrel_size_estimates says\n>\n> * Since there is more than one way to make a joinrel for more than two\n> * base relations, the results we get here could depend on which component\n> * rel pair is provided. In theory we should get the same answers no matter\n> * which pair is provided; in practice, since the selectivity estimation\n> * routines don't handle all cases equally well, we might not. But there's\n> * not much to be done about it.\n>\n> In this example I think the core of the issue is actually not so much\n> bad selectivity estimates as rowcount roundoff error.\n>\n> If we first consider joining \"small\" with \"big\", we get an estimate of\n> 2000 rows (which is dead on for what would happen if we just joined\n> those). Then we estimate the final result size as the join of that to\n> \"lookup\". The selectivity number for that step is somewhat hogwash but\n> happens to yield a result that's not awful (8 rows).\n>\n> In the other case we first estimate the size of the join of \"small\" with\n> the \"lookup\" subquery, and we get a rounded-off estimate of one row,\n> whereas without the roundoff it would have been probably about 0.01.\n> When that's joined to \"big\", we are computing one row times 1 million rows\n> times a selectivity estimate that's about right for the \"small.id =\n> big.small_id\" clause; but because the roundoff already inflated the first\n> join's size so much, you end up with an inflated final result.\n>\n> This suggests that there might be some value in considering the\n> sub-relations from largest to smallest, so that roundoff error\n> in the earlier estimates is less likely to contaminate the final\n> answer. Not sure how expensive it would be to do that or what\n> sort of instability it might introduce into plan choices.\n>\n> Whether that's got anything directly to do with your original problem is\n> hard to say. Joins to subqueries, which we normally lack any stats for,\n> tend to produce pretty bogus selectivity numbers in themselves; so the\n> original problem might've been more of that nature.\n\nThanks for pointing me in the correct direction. The original issue\nwas that values from lookup joined to ref_id and the subset filter in\nthe small table were almost perfectly correlated, which caused the\nunderestimate. In the second case this was hidden by the intermediate\nclamping to 1, accidentally resulting in a more correct estimate.\n\nI actually think that it might be better to consider relations from\nsmallest to largest. The reasoning being - a join cannot produce a\nfraction of a row, it will either produce 0 or 1, and we should\nprobably plan for the case when it does return something.\n\nGoing even further, and I haven't looked at how feasible this is, but\nI have run into several cases lately where cardinality underestimates\nclamping to 1 result in catastrophically bad plans. Like a stack of\nnested loops with unparameterized GroupAggregates and HashAggregates\nas inner sides bad. 
It seems to me that row estimates should clamp to\nsomething slightly larger than 1 unless it's provably going to be 1.\n\nRegards,\nAnts Aasma\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Oct 2017 05:32:03 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rowcount estimation changes based on from clause order"
}
] |
[
{
"msg_contents": "Hi,\n\nI wrote a query that joins several tables usually returning less than\n1000 rows, groups them and generates a JSON object of the result. In\n9.6 is was a question of milliseconds for that query to return the\nrequested data. Now, after upgrading to 10, the query never returns -\nat least it hasn't returned in the last hour.\n\nTo see what happens, I requested the query plan [1]. It looks complex\nand shows a lot of parallelization. I don't have the query plan from\n9.6, but I remember it being considerably simpler.\n\nCan anyone have a guess what altered the performance here so\ndramatically? Is there a way to disable new parallelization features\njust for this query to see if it makes any difference?\n\nBest\n Johannes\n\n\n[1] https://explain.depesz.com/s/xsPP\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 13:06:46 +0200",
"msg_from": "=?UTF-8?Q?johannes_gra=C3=ABn?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance drop after upgrade (9.6 > 10)"
},
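On the side question of switching the new parallel features off for a single query: parallel plans can be disabled per session or per transaction without touching postgresql.conf. A sketch, not taken from the thread, with the slow query standing in as a comment:

-- Disable parallel plans for this session only, re-test, then restore.
SET max_parallel_workers_per_gather = 0;
-- EXPLAIN (ANALYZE, BUFFERS) <the slow query>;
RESET max_parallel_workers_per_gather;

-- Or limit the change to a single transaction:
BEGIN;
SET LOCAL max_parallel_workers_per_gather = 0;
-- run the query here
COMMIT;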
{
"msg_contents": "2017-10-11 13:06 GMT+02:00 johannes graën <[email protected]>:\n\n> Hi,\n>\n> I wrote a query that joins several tables usually returning less than\n> 1000 rows, groups them and generates a JSON object of the result. In\n> 9.6 is was a question of milliseconds for that query to return the\n> requested data. Now, after upgrading to 10, the query never returns -\n> at least it hasn't returned in the last hour.\n>\n> To see what happens, I requested the query plan [1]. It looks complex\n> and shows a lot of parallelization. I don't have the query plan from\n> 9.6, but I remember it being considerably simpler.\n>\n> Can anyone have a guess what altered the performance here so\n> dramatically? Is there a way to disable new parallelization features\n> just for this query to see if it makes any difference?\n>\n>\n\nhave you fresh statistics? After upgrade is necessary to run ANALYZE command\n\nRegards\n\nPavel\n\n\nBest\n> Johannes\n>\n>\n> [1] https://explain.depesz.com/s/xsPP\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2017-10-11 13:06 GMT+02:00 johannes graën <[email protected]>:Hi,\n\nI wrote a query that joins several tables usually returning less than\n1000 rows, groups them and generates a JSON object of the result. In\n9.6 is was a question of milliseconds for that query to return the\nrequested data. Now, after upgrading to 10, the query never returns -\nat least it hasn't returned in the last hour.\n\nTo see what happens, I requested the query plan [1]. It looks complex\nand shows a lot of parallelization. I don't have the query plan from\n9.6, but I remember it being considerably simpler.\n\nCan anyone have a guess what altered the performance here so\ndramatically? Is there a way to disable new parallelization features\njust for this query to see if it makes any difference?\nhave you fresh statistics? After upgrade is necessary to run ANALYZE commandRegardsPavel \nBest\n Johannes\n\n\n[1] https://explain.depesz.com/s/xsPP\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 11 Oct 2017 13:11:17 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
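Pavel's point deserves emphasis: regardless of whether the upgrade used pg_upgrade or dump/restore, optimizer statistics are not carried over, so the new cluster starts out with default estimates. A minimal sketch of what to run right after the upgrade (vacuumdb --all --analyze-in-stages from the shell does the same thing incrementally for every database):

-- Rebuild planner statistics for the current database after a major upgrade.
ANALYZE;

-- Or per table, if only a few relations matter right away
-- (table name is hypothetical):
ANALYZE VERBOSE my_big_table;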
{
"msg_contents": "On Wed, Oct 11, 2017 at 1:11 PM, Pavel Stehule <[email protected]> wrote:\n> have you fresh statistics? After upgrade is necessary to run ANALYZE command\n\nYes, that was missing indeed. I did ANALYZE but apparently on all\ndatabases but this one. I could have guessed that\n1,098,956,679,131,935,754,413,282,631,696,252,928 is not a reasonable\ncost value.\n\nThanks, Pavel.\n\nBest\n Johannes\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 14:19:08 +0200",
"msg_from": "=?UTF-8?Q?johannes_gra=C3=ABn?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
{
"msg_contents": "Hi Pavel, *,\n\nyou were right with ANALYZing the DB first. However, even after doing\nso, I frequently see Seq Scans where an index was used before. This\nusually cooccurs with parallelization and looked different before\nupgrading to 10. I can provide an example for 10 [1], but I cannot\ngenerate a query plan for 9.6 anymore.\n\nAny ideas what makes the new version more seqscanny?\n\n\n[1] https://explain.depesz.com/s/gXD3\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 16:15:59 +0200",
"msg_from": "=?UTF-8?Q?johannes_gra=C3=ABn?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
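One way to check whether the new sequential scans are genuinely cheaper or merely mis-costed is to discourage them for a single session and compare the two plans. This is a diagnostic sketch only, not a setting to leave enabled:

-- Make sequential scans look very expensive so the planner reports
-- the best index-based alternative it can find.
SET enable_seqscan = off;
-- EXPLAIN (ANALYZE, BUFFERS) <the affected query>;
RESET enable_seqscan;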
{
"msg_contents": "On Tue, Oct 24, 2017 at 04:15:59PM +0200, johannes gra�n wrote:\n> upgrading to 10. I can provide an example for 10 [1], but I cannot\n> generate a query plan for 9.6 anymore.\n\nYou could (re)install PG96 alongside PG10 and run a copy of the DB (even from\nyour homedir, or on a difference server) and pg_dump |pg_restore the relevant\ntables (just be sure to specify the alternate host/port/user/etc as needed for\nthe restore invocation).\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 10:18:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
{
"msg_contents": "On 2017-10-24 17:18, Justin Pryzby wrote:\n> You could (re)install PG96 alongside PG10 and run a copy of the DB (even from\n> your homedir, or on a difference server) and pg_dump |pg_restore the relevant\n> tables (just be sure to specify the alternate host/port/user/etc as needed for\n> the restore invocation).\n\nI considered that but it is far too expensive just for getting the old\nquery plan. The database is more than 1 TB big and replaying it from a\ndump to another server took us several days, primarily due to the heavy\nuse of materialized views that are calculated over all rows of some\nlarge tables. As long as there is no safe pg_downgrade --link I'd rather\nkeep trying to improve query performance on the current version.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 17:57:31 +0200",
"msg_from": "=?UTF-8?Q?Johannes_Gra=c3=abn?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
{
"msg_contents": "johannes gra�n wrote:\n> Hi Pavel, *,\n> \n> you were right with ANALYZing the DB first. However, even after doing\n> so, I frequently see Seq Scans where an index was used before. This\n> usually cooccurs with parallelization and looked different before\n> upgrading to 10. I can provide an example for 10 [1], but I cannot\n> generate a query plan for 9.6 anymore.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 18:17:15 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
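For anyone picking this thread up later, the wiki page Álvaro links to mostly asks for the server version, every non-default setting, and a full EXPLAIN (ANALYZE, BUFFERS) of the slow query. A sketch of how to collect the first two pieces, along the lines of what that page suggests:

SELECT version();

-- Every setting that differs from the compiled-in default.
SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');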
{
"msg_contents": "On Tue, Oct 24, 2017 at 04:15:59PM +0200, johannes gra�n wrote:\n> Hi Pavel, *,\n> \n> you were right with ANALYZing the DB first. However, even after doing\n> so, I frequently see Seq Scans where an index was used before. This\n> usually cooccurs with parallelization and looked different before\n> upgrading to 10. I can provide an example for 10 [1], but I cannot\n> generate a query plan for 9.6 anymore.\n> \n> Any ideas what makes the new version more seqscanny?\n\nIs it because max_parallel_workers_per_gather now defaults to 2 ?\n\nBTW, I would tentatively expect a change in default to be documented in the\nrelease notes but can't see that it's.\n77cd477c4ba885cfa1ba67beaa82e06f2e182b85\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Oct 2017 14:45:15 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop after upgrade (9.6 > 10)"
},
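Justin's guess can be confirmed directly on a given installation: pg_settings exposes both the built-in default and the value in effect. On a stock v10 cluster boot_val for max_parallel_workers_per_gather is 2, whereas 9.6 shipped with 0 (as the later messages in this thread confirm). A sketch:

SELECT name, setting, boot_val, source
FROM pg_settings
WHERE name LIKE 'max_parallel%';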
{
"msg_contents": "On Thu, Oct 26, 2017 at 02:45:15PM -0500, Justin Pryzby wrote:\n> On Tue, Oct 24, 2017 at 04:15:59PM +0200, johannes gra�n wrote:\n> > Hi Pavel, *,\n> > \n> > you were right with ANALYZing the DB first. However, even after doing\n> > so, I frequently see Seq Scans where an index was used before. This\n> > usually cooccurs with parallelization and looked different before\n> > upgrading to 10. I can provide an example for 10 [1], but I cannot\n> > generate a query plan for 9.6 anymore.\n> > \n> > Any ideas what makes the new version more seqscanny?\n> \n> Is it because max_parallel_workers_per_gather now defaults to 2 ?\n> \n> BTW, I would tentatively expect a change in default to be documented in the\n> release notes but can't see that it's.\n> 77cd477c4ba885cfa1ba67beaa82e06f2e182b85\n\nOops, you are correct. The PG 10 release notes, which I wrote, should\nhave mentioned this. :-(\n\n\thttps://www.postgresql.org/docs/10/static/release-10.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Sun, 28 Jan 2018 18:53:10 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] performance drop after upgrade (9.6 > 10)"
},
{
"msg_contents": "Moving to -hackers;\n\nOn Sun, Jan 28, 2018 at 06:53:10PM -0500, Bruce Momjian wrote:\n> On Thu, Oct 26, 2017 at 02:45:15PM -0500, Justin Pryzby wrote:\n> > Is it because max_parallel_workers_per_gather now defaults to 2 ?\n> > \n> > BTW, I would tentatively expect a change in default to be documented in the\n> > release notes but can't see that it's.\n> > 77cd477c4ba885cfa1ba67beaa82e06f2e182b85\n> \n> Oops, you are correct. The PG 10 release notes, which I wrote, should\n> have mentioned this. :-(\n\nI just saw your January response to my October mail..\n\nMaybe it's silly to update PG10 notes 9 months after release..\n..but, any reason not to add to v10 release notes now (I don't know if the web\ndocs would be updated until the next point release?)\n\nJustin\n\n",
"msg_date": "Thu, 24 May 2018 20:00:25 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "add default parallel query to v10 release notes? (Re: [PERFORM]\n performance drop after upgrade (9.6 > 10))"
},
{
"msg_contents": "On Thu, May 24, 2018 at 08:00:25PM -0500, Justin Pryzby wrote:\n> Moving to -hackers;\n> \n> On Sun, Jan 28, 2018 at 06:53:10PM -0500, Bruce Momjian wrote:\n> > On Thu, Oct 26, 2017 at 02:45:15PM -0500, Justin Pryzby wrote:\n> > > Is it because max_parallel_workers_per_gather now defaults to 2 ?\n> > > \n> > > BTW, I would tentatively expect a change in default to be documented in the\n> > > release notes but can't see that it's.\n> > > 77cd477c4ba885cfa1ba67beaa82e06f2e182b85\n> > \n> > Oops, you are correct. The PG 10 release notes, which I wrote, should\n> > have mentioned this. :-(\n> \n> I just saw your January response to my October mail..\n> \n> Maybe it's silly to update PG10 notes 9 months after release..\n> ..but, any reason not to add to v10 release notes now (I don't know if the web\n> docs would be updated until the next point release?)\n\nSo I did some research on this, particularly to find out how it was\nmissed in the PG 10 release notes. It turns out that\nmax_parallel_workers_per_gather has always defaulted to 2 in head, and\nthis was changed to default to 0 in the 9.6 branch:\n\n\tcommit f85b1a84152f7bf019fd7a2c5eede97867dcddbb\n\tAuthor: Robert Haas <[email protected]>\n\tDate: Tue Aug 16 08:09:15 2016 -0400\n\t\n\t Disable parallel query by default.\n\t\n\t Per discussion, set the default value of max_parallel_workers_per_gather\n\t to 0 in 9.6 only. We'll leave it enabled in master so that it gets\n\t more testing and in the hope that it can be enable by default in v10.\n\nTherefore, there was no commit to find in the PG 10 commit logs. :-O \nNot sure how we can avoid this kind of problem in the future.\n\nThe attached patch adds a PG 10.0 release note item about this change. \nI put it at the bottom since it is newly added.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Wed, 20 Jun 2018 11:13:49 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add default parallel query to v10 release notes? (Re: [PERFORM]\n performance drop after upgrade (9.6 > 10))"
},
{
"msg_contents": "On Wed, Jun 20, 2018 at 8:43 PM, Bruce Momjian <[email protected]> wrote:\n> On Thu, May 24, 2018 at 08:00:25PM -0500, Justin Pryzby wrote:\n>\n> So I did some research on this, particularly to find out how it was\n> missed in the PG 10 release notes. It turns out that\n> max_parallel_workers_per_gather has always defaulted to 2 in head, and\n> this was changed to default to 0 in the 9.6 branch:\n>\n> commit f85b1a84152f7bf019fd7a2c5eede97867dcddbb\n> Author: Robert Haas <[email protected]>\n> Date: Tue Aug 16 08:09:15 2016 -0400\n>\n> Disable parallel query by default.\n>\n> Per discussion, set the default value of max_parallel_workers_per_gather\n> to 0 in 9.6 only. We'll leave it enabled in master so that it gets\n> more testing and in the hope that it can be enable by default in v10.\n>\n> Therefore, there was no commit to find in the PG 10 commit logs. :-O\n> Not sure how we can avoid this kind of problem in the future.\n>\n> The attached patch adds a PG 10.0 release note item about this change.\n>\n\nYour proposed text looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 22 Jun 2018 14:53:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add default parallel query to v10 release notes? (Re: [PERFORM]\n performance drop after upgrade (9.6 > 10))"
},
{
"msg_contents": "On Fri, Jun 22, 2018 at 02:53:36PM +0530, Amit Kapila wrote:\n> On Wed, Jun 20, 2018 at 8:43 PM, Bruce Momjian <[email protected]> wrote:\n> > On Thu, May 24, 2018 at 08:00:25PM -0500, Justin Pryzby wrote:\n> >\n> > So I did some research on this, particularly to find out how it was\n> > missed in the PG 10 release notes. It turns out that\n> > max_parallel_workers_per_gather has always defaulted to 2 in head, and\n> > this was changed to default to 0 in the 9.6 branch:\n> >\n> > commit f85b1a84152f7bf019fd7a2c5eede97867dcddbb\n> > Author: Robert Haas <[email protected]>\n> > Date: Tue Aug 16 08:09:15 2016 -0400\n> >\n> > Disable parallel query by default.\n> >\n> > Per discussion, set the default value of max_parallel_workers_per_gather\n> > to 0 in 9.6 only. We'll leave it enabled in master so that it gets\n> > more testing and in the hope that it can be enable by default in v10.\n> >\n> > Therefore, there was no commit to find in the PG 10 commit logs. :-O\n> > Not sure how we can avoid this kind of problem in the future.\n> >\n> > The attached patch adds a PG 10.0 release note item about this change.\n> >\n> \n> Your proposed text looks good to me.\n\nDone, thanks.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 9 Jul 2018 11:19:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add default parallel query to v10 release notes? (Re: [PERFORM]\n performance drop after upgrade (9.6 > 10))"
}
] |
[
{
"msg_contents": "Hello all,\n\nMy scenario is: postgresql 10, Processor Xeon 2.8GHz / 4-core- 8gb Ram, OS\nDebian 8.\n\nWhen creating index on table of approximately 10GB of data, the DBMS hangs\n(I think), because even after waiting 10 hours there was no return of the\ncommand. It happened by creating Hash indexes and B + tree indexes.\nHowever, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\nThe data environment is the LINEITEM table (TPC-H benchmark) of link 1\n<http://kejser.org/wp-content/uploads/2014/06/image_thumb2.png>below. The\ncolumns/indexes that caught the creation were: * Hash Index in column:\nL_TAX * Btree Index in column: L_RECEIPTDATE.\n\nIf someone has a hint how to speed up index creation so that it completes\nsuccessfully. I know that PostgreSQL 10 has some parallelism features and\nsince my server is dedicated only to the DBMS, do I change the parameters:\nforce_parallel_mode, max_parallel_workers_per_gather could speed up index\ncreation on large tables? Any tip is welcome.\n\nDDL comand :\nL_ORDERKEY BIGINT NOT NULL, - references O_ORDERKEY\nL_PARTKEY BIGINT NOT NULL, - references P_PARTKEY (compound fk to PARTSUPP)\nL_SUPPKEY BIGINT NOT NULL, - references S_SUPPKEY (compound fk to PARTSUPP)\nL_LINENUMBER INTEGER,\nL_QUANTITY DECIMAL,\nL_EXTENDEDPRICE DECIMAL,\nL_DISCOUNT DECIMAL,\nL_TAX DECIMAL,\nL_RETURNFLAG CHAR (1),\nL_LINESTATUS CHAR (1),\nL_SHIPDATE DATE,\nL_COMMITDATE DATE,\nL_RECEIPTDATE DATE,\nL_SHIPINSTRUCT CHAR (25),\nL_SHIPMODE CHAR (10),\nL_COMMENT VARCHAR (44),PRIMARY KEY (L_ORDERKEY, L_LINENUMBER)\n\n1- http://kejser.org/wp-content/uploads/2014/06/image_thumb2.png\n\nbest Regards\n\nNeto\n\n\nHello all,My scenario is: postgresql 10, Processor Xeon 2.8GHz / 4-core- 8gb Ram, OS Debian 8.\nWhen creating index on table of approximately 10GB of data, the DBMS \nhangs (I think), because even after waiting 10 hours there was no return\n of the command. \nIt happened by creating Hash indexes and B + tree indexes. However, for \nsome columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\nThe data environment is the LINEITEM table (TPC-H benchmark) of link 1below. The columns/indexes that caught the creation were:\n* Hash Index in column: L_TAX\n* Btree Index in column: L_RECEIPTDATE.\nIf someone has a hint how to speed up index creation so that it \ncompletes successfully. I know that PostgreSQL 10 has some parallelism \nfeatures and since my server is dedicated only to the DBMS, do I change \nthe parameters: force_parallel_mode, max_parallel_workers_per_gather \ncould speed up index creation on large tables?\nAny tip is welcome.\nDDL comand :\nL_ORDERKEY BIGINT NOT NULL, - references O_ORDERKEY\nL_PARTKEY BIGINT NOT NULL, - references P_PARTKEY (compound fk to PARTSUPP)\nL_SUPPKEY BIGINT NOT NULL, - references S_SUPPKEY (compound fk to PARTSUPP)\nL_LINENUMBER INTEGER,\nL_QUANTITY DECIMAL,\nL_EXTENDEDPRICE DECIMAL,\nL_DISCOUNT DECIMAL,\nL_TAX DECIMAL,\nL_RETURNFLAG CHAR (1),\nL_LINESTATUS CHAR (1),\nL_SHIPDATE DATE,\nL_COMMITDATE DATE,\nL_RECEIPTDATE DATE,\nL_SHIPINSTRUCT CHAR (25),\nL_SHIPMODE CHAR (10),\nL_COMMENT VARCHAR (44),\nPRIMARY KEY (L_ORDERKEY, L_LINENUMBER)\n1- http://kejser.org/wp-content/uploads/2014/06/image_thumb2.png\nbest Regards Neto",
"msg_date": "Wed, 11 Oct 2017 09:58:28 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "blocking index creation"
},
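A note on the tuning question in this first message: in version 10, CREATE INDEX itself is not parallelized as far as I know (parallel B-tree builds arrived in a later release), so force_parallel_mode and max_parallel_workers_per_gather will not speed it up. What usually helps a large build is giving the building session more sort memory. A sketch with a hypothetical index name:

-- Give this session a generous sort budget for the build, then restore it.
SET maintenance_work_mem = '2GB';
CREATE INDEX l_receiptdate_idx ON lineitem (l_receiptdate);
RESET maintenance_work_mem;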
{
"msg_contents": "Neto pr wrote:\n> When creating index on table of approximately 10GB of data, the DBMS hangs (I think),\n> because even after waiting 10 hours there was no return of the command.\n> It happened by creating Hash indexes and B + tree indexes.\n> However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n\n> If someone has a hint how to speed up index creation so that it completes successfully.\n\nLook if CREATE INDEX is running or waiting for a lock (check the\n\"pg_locks\" table, see if the backend consumes CPU time).\n\nMaybe there is a long-running transaction that blocks the\nACCESS EXCLUSIVE lock required. It could also be a prepared\ntransaction.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 15:46:19 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: blocking index creation"
},
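A sketch of the checks Laurenz suggests, runnable from another session: any old open transaction or forgotten two-phase transaction shown here can hold a lock that leaves the CREATE INDEX waiting indefinitely.

-- Sessions with an open transaction, oldest first.
SELECT pid, state, xact_start, wait_event_type, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;

-- Prepared (two-phase) transactions that were never committed or rolled back.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;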
{
"msg_contents": "2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]>:\n\n> Neto pr wrote:\n> > When creating index on table of approximately 10GB of data, the DBMS\n> hangs (I think),\n> > because even after waiting 10 hours there was no return of the command.\n> > It happened by creating Hash indexes and B + tree indexes.\n> > However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n>\n> > If someone has a hint how to speed up index creation so that it\n> completes successfully.\n>\n> Look if CREATE INDEX is running or waiting for a lock (check the\n> \"pg_locks\" table, see if the backend consumes CPU time).\n>\n>\nIn this moment now, there is an index being created in the Lineitem table\n(+ - 10 Gb), and apparently it is locked, since it started 7 hours ago.\nI've looked at the pg_locks table and look at the result, it's with\n\"ShareLock\" lock mode.\nIs this blocking correct? or should it be another type?\n\nBefore creating the index, should I set the type of transaction lock? What?\n-------------------------------------------------------------------------------------------\nSELECT\n L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\nvirtualtransaction\nFROM pg_locks l, pg_class c\nwhere c.oid = l.relation\n\n-------------- RESULT\n--------------------------------------------------------------\nAccessShareLock pg_class_tblspc_relfilenode_index relation TRUE (null) 3/71\nAccessShareLock pg_class_relname_nsp_index relation TRUE (null) 3/71\nAccessShareLock pg_class_oid_index relation TRUE (null) 3/71\nAccessShareLock pg_class relation TRUE (null) 3/71\nAccessShareLock pg_locks relation TRUE (null) 3/71\nShareLock lineitem relation TRUE (null) 21/3769\n\n> Maybe there is a long-running transaction that blocks the\n> ACCESS EXCLUSIVE lock required. It could also be a prepared\n> transaction.\n>\n> Yours,\n> Laurenz Albe\n>\n\nBest Regards\nNeto\n\n2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]>:Neto pr wrote:\n> When creating index on table of approximately 10GB of data, the DBMS hangs (I think),\n> because even after waiting 10 hours there was no return of the command.\n> It happened by creating Hash indexes and B + tree indexes.\n> However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n\n> If someone has a hint how to speed up index creation so that it completes successfully.\n\nLook if CREATE INDEX is running or waiting for a lock (check the\n\"pg_locks\" table, see if the backend consumes CPU time).\nIn this moment now, there is an index being created in the Lineitem table (+ - 10 Gb), and apparently it is locked, since it started 7 hours ago.I've looked at the pg_locks table and look at the result, it's with \"ShareLock\" lock mode.Is this blocking correct? or should it be another type?Before creating the index, should I set the type of transaction lock? 
What?-------------------------------------------------------------------------------------------SELECT L.mode, c.relname, locktype, l.GRANTED, l.transactionid, virtualtransactionFROM pg_locks l, pg_class c where c.oid = l.relation-------------- RESULT --------------------------------------------------------------\n\n\n\n\n\n\n\n\nAccessShareLock\npg_class_tblspc_relfilenode_index\nrelation\nTRUE\n(null)\n3/71\n\n\nAccessShareLock\npg_class_relname_nsp_index\nrelation\nTRUE\n(null)\n3/71\n\n\nAccessShareLock\npg_class_oid_index\nrelation\nTRUE\n(null)\n3/71\n\n\nAccessShareLock\npg_class\nrelation\nTRUE\n(null)\n3/71\n\n\nAccessShareLock\npg_locks\nrelation\nTRUE\n(null)\n3/71\n\n\nShareLock\nlineitem\nrelation\nTRUE\n(null)\n21/3769\n\n \n\n\n\n\nMaybe there is a long-running transaction that blocks the\nACCESS EXCLUSIVE lock required. It could also be a prepared\ntransaction.\n\nYours,\nLaurenz Albe\nBest RegardsNeto",
"msg_date": "Wed, 11 Oct 2017 11:11:23 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: blocking index creation"
},
{
"msg_contents": "\n\nOn 10/11/2017 04:11 PM, Neto pr wrote:\n> \n> 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n> <mailto:[email protected]>>:\n> \n> Neto pr wrote:\n> > When creating index on table of approximately 10GB of data, the DBMS hangs (I think),\n> > because even after waiting 10 hours there was no return of the command.\n> > It happened by creating Hash indexes and B + tree indexes.\n> > However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n> \n> > If someone has a hint how to speed up index creation so that it completes successfully.\n> \n> Look if CREATE INDEX is running or waiting for a lock (check the\n> \"pg_locks\" table, see if the backend consumes CPU time).\n> \n> \n> In this moment now, there is an index being created in the Lineitem\n> table (+ - 10 Gb), and apparently it is locked, since it started 7 hours\n> ago.\n> I've looked at the pg_locks table and look at the result, it's with\n> \"ShareLock\" lock mode.\n> Is this blocking correct? or should it be another type?\n> \n\nYes, CREATE INDEX acquire SHARE lock, see\n\n https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n\n> Before creating the index, should I set the type of transaction lock? What?\n\nEeee? Not sure I understand. The command acquires all necessary locks\nautomatically.\n\n> -------------------------------------------------------------------------------------------\n> SELECT\n> L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n> virtualtransaction\n> FROM pg_locks l, pg_class c\n> where c.oid = l.relation\n> \n> -------------- RESULT\n> --------------------------------------------------------------\n> AccessShareLock \tpg_class_tblspc_relfilenode_index \trelation \tTRUE\n> (null) \t3/71\n> AccessShareLock \tpg_class_relname_nsp_index \trelation \tTRUE \t(null) \t3/71\n> AccessShareLock \tpg_class_oid_index \trelation \tTRUE \t(null) \t3/71\n> AccessShareLock \tpg_class \trelation \tTRUE \t(null) \t3/71\n> AccessShareLock \tpg_locks \trelation \tTRUE \t(null) \t3/71\n> ShareLock \tlineitem \trelation \tTRUE \t(null) \t21/3769\n> \n> \n\nWell, we see something is holding a SHARE lock on the \"lineitem\" table,\nbut we don't really know what the session is doing.\n\nThere's a PID in the pg_locks table, you can use it to lookup the\nsession in pg_stat_activity which includes the query (and also \"state\"\ncolumn that will tell you if it's active or waiting for a lock.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 23:08:55 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: blocking index creation"
},
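A sketch of the lookup Tomas describes, tying each lock on the table back to the session that holds or wants it (the wait_event columns exist in 9.6 and later):

SELECT l.pid, l.mode, l.granted,
       a.state, a.wait_event_type, a.wait_event,
       a.xact_start, left(a.query, 60) AS query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE c.relname = 'lineitem';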
{
"msg_contents": "Hello all,\nI ran the query on PG_STAT_ACTIVITY table (Select * From\npg_stat_activity), see the complete result in this worksheet of the link\nbelow.\n\nhttps://sites.google.com/site/goissbr/img/Resultado_pg_stat_activity-create_index.xls\n\nThe CREATE INDEX command line is identified with the orange background.\nAt this point 18 hours have passed and the creation of a single index has\nnot yet been completed.\nI have verified that the command is Active status, but I do not know if\nit's waiting for anything, can you help me analyze the attached output.\n\nRegards\nNeto\n\n2017-10-11 18:08 GMT-03:00 Tomas Vondra <[email protected]>:\n\n>\n>\n> On 10/11/2017 04:11 PM, Neto pr wrote:\n> >\n> > 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n> > <mailto:[email protected]>>:\n> >\n> > Neto pr wrote:\n> > > When creating index on table of approximately 10GB of data, the\n> DBMS hangs (I think),\n> > > because even after waiting 10 hours there was no return of the\n> command.\n> > > It happened by creating Hash indexes and B + tree indexes.\n> > > However, for some columns, it was successfully (L_RETURNFLAG,\n> L_PARTKEY).\n> >\n> > > If someone has a hint how to speed up index creation so that it\n> completes successfully.\n> >\n> > Look if CREATE INDEX is running or waiting for a lock (check the\n> > \"pg_locks\" table, see if the backend consumes CPU time).\n> >\n> >\n> > In this moment now, there is an index being created in the Lineitem\n> > table (+ - 10 Gb), and apparently it is locked, since it started 7 hours\n> > ago.\n> > I've looked at the pg_locks table and look at the result, it's with\n> > \"ShareLock\" lock mode.\n> > Is this blocking correct? or should it be another type?\n> >\n>\n> Yes, CREATE INDEX acquire SHARE lock, see\n>\n> https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n>\n> > Before creating the index, should I set the type of transaction lock?\n> What?\n>\n> Eeee? Not sure I understand. 
The command acquires all necessary locks\n> automatically.\n>\n> > ------------------------------------------------------------\n> -------------------------------\n> > SELECT\n> > L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n> > virtualtransaction\n> > FROM pg_locks l, pg_class c\n> > where c.oid = l.relation\n> >\n> > -------------- RESULT\n> > --------------------------------------------------------------\n> > AccessShareLock pg_class_tblspc_relfilenode_index relation\n> TRUE\n> > (null) 3/71\n> > AccessShareLock pg_class_relname_nsp_index relation\n> TRUE (null) 3/71\n> > AccessShareLock pg_class_oid_index relation TRUE\n> (null) 3/71\n> > AccessShareLock pg_class relation TRUE (null)\n> 3/71\n> > AccessShareLock pg_locks relation TRUE (null)\n> 3/71\n> > ShareLock lineitem relation TRUE (null) 21/3769\n> >\n> >\n>\n> Well, we see something is holding a SHARE lock on the \"lineitem\" table,\n> but we don't really know what the session is doing.\n>\n> There's a PID in the pg_locks table, you can use it to lookup the\n> session in pg_stat_activity which includes the query (and also \"state\"\n> column that will tell you if it's active or waiting for a lock.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHello all,I ran the query on PG_STAT_ACTIVITY table (Select * From pg_stat_activity), see the complete result in this worksheet of the link below.https://sites.google.com/site/goissbr/img/Resultado_pg_stat_activity-create_index.xlsThe CREATE INDEX command line is identified with the orange background.At this point 18 hours have passed and the creation of a single index has not yet been completed.I have verified that the command is Active status, but I do not know if it's waiting for anything, can you help me analyze the attached output.RegardsNeto2017-10-11 18:08 GMT-03:00 Tomas Vondra <[email protected]>:\n\nOn 10/11/2017 04:11 PM, Neto pr wrote:\n>\n> 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n> <mailto:[email protected]>>:\n>\n> Neto pr wrote:\n> > When creating index on table of approximately 10GB of data, the DBMS hangs (I think),\n> > because even after waiting 10 hours there was no return of the command.\n> > It happened by creating Hash indexes and B + tree indexes.\n> > However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n>\n> > If someone has a hint how to speed up index creation so that it completes successfully.\n>\n> Look if CREATE INDEX is running or waiting for a lock (check the\n> \"pg_locks\" table, see if the backend consumes CPU time).\n>\n>\n> In this moment now, there is an index being created in the Lineitem\n> table (+ - 10 Gb), and apparently it is locked, since it started 7 hours\n> ago.\n> I've looked at the pg_locks table and look at the result, it's with\n> \"ShareLock\" lock mode.\n> Is this blocking correct? or should it be another type?\n>\n\nYes, CREATE INDEX acquire SHARE lock, see\n\n https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n\n> Before creating the index, should I set the type of transaction lock? What?\n\nEeee? Not sure I understand. 
The command acquires all necessary locks\nautomatically.\n\n> -------------------------------------------------------------------------------------------\n> SELECT\n> L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n> virtualtransaction\n> FROM pg_locks l, pg_class c\n> where c.oid = l.relation\n>\n> -------------- RESULT\n> --------------------------------------------------------------\n> AccessShareLock pg_class_tblspc_relfilenode_index relation TRUE\n> (null) 3/71\n> AccessShareLock pg_class_relname_nsp_index relation TRUE (null) 3/71\n> AccessShareLock pg_class_oid_index relation TRUE (null) 3/71\n> AccessShareLock pg_class relation TRUE (null) 3/71\n> AccessShareLock pg_locks relation TRUE (null) 3/71\n> ShareLock lineitem relation TRUE (null) 21/3769\n>\n> \n\nWell, we see something is holding a SHARE lock on the \"lineitem\" table,\nbut we don't really know what the session is doing.\n\nThere's a PID in the pg_locks table, you can use it to lookup the\nsession in pg_stat_activity which includes the query (and also \"state\"\ncolumn that will tell you if it's active or waiting for a lock.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 11 Oct 2017 19:54:16 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: blocking index creation"
},
{
"msg_contents": "Dear,\nWith alternative, I tested the creation using concurrency\n(CREATE INDEX CONCURRENCY NAME_IDX ON TABLE USING HASH (COLUMN);\n\nfrom what I saw the index already appeared in the query result, because\nbefore this, the index did not even appear in the result, only the Lineitem\ntable:\n\nSELECT\n L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\nvirtualtransaction\nFROM pg_locks l, pg_class c\nwhere c.oid = l.relation\n\nscreen result after concurrency: https://i.stack.imgur.com/htzIY.jpg\n\nNow, I'm waiting to finish creating the index.\n\n2017-10-11 19:54 GMT-03:00 Neto pr <[email protected]>:\n\n> Hello all,\n> I ran the query on PG_STAT_ACTIVITY table (Select * From\n> pg_stat_activity), see the complete result in this worksheet of the link\n> below.\n>\n> https://sites.google.com/site/goissbr/img/Resultado_pg_stat_\n> activity-create_index.xls\n>\n> The CREATE INDEX command line is identified with the orange background.\n> At this point 18 hours have passed and the creation of a single index has\n> not yet been completed.\n> I have verified that the command is Active status, but I do not know if\n> it's waiting for anything, can you help me analyze the attached output.\n>\n> Regards\n> Neto\n>\n> 2017-10-11 18:08 GMT-03:00 Tomas Vondra <[email protected]>:\n>\n>>\n>>\n>> On 10/11/2017 04:11 PM, Neto pr wrote:\n>> >\n>> > 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n>> > <mailto:[email protected]>>:\n>> >\n>> > Neto pr wrote:\n>> > > When creating index on table of approximately 10GB of data, the\n>> DBMS hangs (I think),\n>> > > because even after waiting 10 hours there was no return of the\n>> command.\n>> > > It happened by creating Hash indexes and B + tree indexes.\n>> > > However, for some columns, it was successfully (L_RETURNFLAG,\n>> L_PARTKEY).\n>> >\n>> > > If someone has a hint how to speed up index creation so that it\n>> completes successfully.\n>> >\n>> > Look if CREATE INDEX is running or waiting for a lock (check the\n>> > \"pg_locks\" table, see if the backend consumes CPU time).\n>> >\n>> >\n>> > In this moment now, there is an index being created in the Lineitem\n>> > table (+ - 10 Gb), and apparently it is locked, since it started 7 hours\n>> > ago.\n>> > I've looked at the pg_locks table and look at the result, it's with\n>> > \"ShareLock\" lock mode.\n>> > Is this blocking correct? or should it be another type?\n>> >\n>>\n>> Yes, CREATE INDEX acquire SHARE lock, see\n>>\n>> https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n>>\n>> > Before creating the index, should I set the type of transaction lock?\n>> What?\n>>\n>> Eeee? Not sure I understand. 
The command acquires all necessary locks\n>> automatically.\n>>\n>> > ------------------------------------------------------------\n>> -------------------------------\n>> > SELECT\n>> > L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n>> > virtualtransaction\n>> > FROM pg_locks l, pg_class c\n>> > where c.oid = l.relation\n>> >\n>> > -------------- RESULT\n>> > --------------------------------------------------------------\n>> > AccessShareLock pg_class_tblspc_relfilenode_index\n>> relation TRUE\n>> > (null) 3/71\n>> > AccessShareLock pg_class_relname_nsp_index relation\n>> TRUE (null) 3/71\n>> > AccessShareLock pg_class_oid_index relation TRUE\n>> (null) 3/71\n>> > AccessShareLock pg_class relation TRUE (null)\n>> 3/71\n>> > AccessShareLock pg_locks relation TRUE (null)\n>> 3/71\n>> > ShareLock lineitem relation TRUE (null) 21/3769\n>> >\n>> >\n>>\n>> Well, we see something is holding a SHARE lock on the \"lineitem\" table,\n>> but we don't really know what the session is doing.\n>>\n>> There's a PID in the pg_locks table, you can use it to lookup the\n>> session in pg_stat_activity which includes the query (and also \"state\"\n>> column that will tell you if it's active or waiting for a lock.\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n>\n\nDear,With alternative, I tested the creation using concurrency (CREATE INDEX CONCURRENCY NAME_IDX ON TABLE USING HASH (COLUMN);from what I saw the index already appeared in the query result, because before this, the index did not even appear in the result, only the Lineitem table:SELECT L.mode, c.relname, locktype, l.GRANTED, l.transactionid, virtualtransactionFROM pg_locks l, pg_class cwhere c.oid = l.relationscreen result after concurrency: https://i.stack.imgur.com/htzIY.jpgNow, I'm waiting to finish creating the index.2017-10-11 19:54 GMT-03:00 Neto pr <[email protected]>:Hello all,I ran the query on PG_STAT_ACTIVITY table (Select * From pg_stat_activity), see the complete result in this worksheet of the link below.https://sites.google.com/site/goissbr/img/Resultado_pg_stat_activity-create_index.xlsThe CREATE INDEX command line is identified with the orange background.At this point 18 hours have passed and the creation of a single index has not yet been completed.I have verified that the command is Active status, but I do not know if it's waiting for anything, can you help me analyze the attached output.RegardsNeto2017-10-11 18:08 GMT-03:00 Tomas Vondra <[email protected]>:\n\nOn 10/11/2017 04:11 PM, Neto pr wrote:\n>\n> 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n> <mailto:[email protected]>>:\n>\n> Neto pr wrote:\n> > When creating index on table of approximately 10GB of data, the DBMS hangs (I think),\n> > because even after waiting 10 hours there was no return of the command.\n> > It happened by creating Hash indexes and B + tree indexes.\n> > However, for some columns, it was successfully (L_RETURNFLAG, L_PARTKEY).\n>\n> > If someone has a hint how to speed up index creation so that it completes successfully.\n>\n> Look if CREATE INDEX is running or waiting for a lock (check the\n> \"pg_locks\" table, see if the backend consumes CPU time).\n>\n>\n> In this moment now, there is an index being created in the Lineitem\n> table (+ - 10 Gb), and apparently it is locked, since it started 7 hours\n> ago.\n> I've looked at the pg_locks table and look at the result, it's with\n> \"ShareLock\" lock mode.\n> Is this 
blocking correct? or should it be another type?\n>\n\nYes, CREATE INDEX acquire SHARE lock, see\n\n https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n\n> Before creating the index, should I set the type of transaction lock? What?\n\nEeee? Not sure I understand. The command acquires all necessary locks\nautomatically.\n\n> -------------------------------------------------------------------------------------------\n> SELECT\n> L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n> virtualtransaction\n> FROM pg_locks l, pg_class c\n> where c.oid = l.relation\n>\n> -------------- RESULT\n> --------------------------------------------------------------\n> AccessShareLock pg_class_tblspc_relfilenode_index relation TRUE\n> (null) 3/71\n> AccessShareLock pg_class_relname_nsp_index relation TRUE (null) 3/71\n> AccessShareLock pg_class_oid_index relation TRUE (null) 3/71\n> AccessShareLock pg_class relation TRUE (null) 3/71\n> AccessShareLock pg_locks relation TRUE (null) 3/71\n> ShareLock lineitem relation TRUE (null) 21/3769\n>\n> \n\nWell, we see something is holding a SHARE lock on the \"lineitem\" table,\nbut we don't really know what the session is doing.\n\nThere's a PID in the pg_locks table, you can use it to lookup the\nsession in pg_stat_activity which includes the query (and also \"state\"\ncolumn that will tell you if it's active or waiting for a lock.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 11 Oct 2017 22:35:53 -0300",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: blocking index creation"
},
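For the record, the option is spelled CONCURRENTLY (presumably what was actually typed, since the build started; the CONCURRENCY form quoted above would be rejected with a syntax error). It builds the index while still allowing writes to the table, since it takes only a SHARE UPDATE EXCLUSIVE lock, at the price of scanning the table twice and not being usable inside a transaction block. A sketch with a hypothetical index name:

-- Allows concurrent INSERT/UPDATE/DELETE on the table while the index builds;
-- cannot be run inside a transaction block.
CREATE INDEX CONCURRENTLY l_tax_hash_idx ON lineitem USING hash (l_tax);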
{
"msg_contents": "Try the queries here to check locks:\n\nhttps://wiki.postgresql.org/wiki/Lock_Monitoring\n\nOn Wed, Oct 11, 2017 at 7:35 PM, Neto pr <[email protected]> wrote:\n> Dear,\n> With alternative, I tested the creation using concurrency\n> (CREATE INDEX CONCURRENCY NAME_IDX ON TABLE USING HASH (COLUMN);\n>\n> from what I saw the index already appeared in the query result, because\n> before this, the index did not even appear in the result, only the Lineitem\n> table:\n>\n> SELECT\n> L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n> virtualtransaction\n> FROM pg_locks l, pg_class c\n> where c.oid = l.relation\n>\n> screen result after concurrency: https://i.stack.imgur.com/htzIY.jpg\n>\n> Now, I'm waiting to finish creating the index.\n>\n> 2017-10-11 19:54 GMT-03:00 Neto pr <[email protected]>:\n>>\n>> Hello all,\n>> I ran the query on PG_STAT_ACTIVITY table (Select * From\n>> pg_stat_activity), see the complete result in this worksheet of the link\n>> below.\n>>\n>>\n>> https://sites.google.com/site/goissbr/img/Resultado_pg_stat_activity-create_index.xls\n>>\n>> The CREATE INDEX command line is identified with the orange background.\n>> At this point 18 hours have passed and the creation of a single index has\n>> not yet been completed.\n>> I have verified that the command is Active status, but I do not know if\n>> it's waiting for anything, can you help me analyze the attached output.\n>>\n>> Regards\n>> Neto\n>>\n>> 2017-10-11 18:08 GMT-03:00 Tomas Vondra <[email protected]>:\n>>>\n>>>\n>>>\n>>> On 10/11/2017 04:11 PM, Neto pr wrote:\n>>> >\n>>> > 2017-10-11 10:46 GMT-03:00 Laurenz Albe <[email protected]\n>>> > <mailto:[email protected]>>:\n>>> >\n>>> > Neto pr wrote:\n>>> > > When creating index on table of approximately 10GB of data, the\n>>> > DBMS hangs (I think),\n>>> > > because even after waiting 10 hours there was no return of the\n>>> > command.\n>>> > > It happened by creating Hash indexes and B + tree indexes.\n>>> > > However, for some columns, it was successfully (L_RETURNFLAG,\n>>> > L_PARTKEY).\n>>> >\n>>> > > If someone has a hint how to speed up index creation so that it\n>>> > completes successfully.\n>>> >\n>>> > Look if CREATE INDEX is running or waiting for a lock (check the\n>>> > \"pg_locks\" table, see if the backend consumes CPU time).\n>>> >\n>>> >\n>>> > In this moment now, there is an index being created in the Lineitem\n>>> > table (+ - 10 Gb), and apparently it is locked, since it started 7\n>>> > hours\n>>> > ago.\n>>> > I've looked at the pg_locks table and look at the result, it's with\n>>> > \"ShareLock\" lock mode.\n>>> > Is this blocking correct? or should it be another type?\n>>> >\n>>>\n>>> Yes, CREATE INDEX acquire SHARE lock, see\n>>>\n>>> https://www.postgresql.org/docs/9.1/static/explicit-locking.html\n>>>\n>>> > Before creating the index, should I set the type of transaction lock?\n>>> > What?\n>>>\n>>> Eeee? Not sure I understand. 
The command acquires all necessary locks\n>>> automatically.\n>>>\n>>> >\n>>> > -------------------------------------------------------------------------------------------\n>>> > SELECT\n>>> > L.mode, c.relname, locktype, l.GRANTED, l.transactionid,\n>>> > virtualtransaction\n>>> > FROM pg_locks l, pg_class c\n>>> > where c.oid = l.relation\n>>> >\n>>> > -------------- RESULT\n>>> > --------------------------------------------------------------\n>>> > AccessShareLock pg_class_tblspc_relfilenode_index relation\n>>> > TRUE\n>>> > (null) 3/71\n>>> > AccessShareLock pg_class_relname_nsp_index relation\n>>> > TRUE (null) 3/71\n>>> > AccessShareLock pg_class_oid_index relation TRUE\n>>> > (null) 3/71\n>>> > AccessShareLock pg_class relation TRUE (null)\n>>> > 3/71\n>>> > AccessShareLock pg_locks relation TRUE (null)\n>>> > 3/71\n>>> > ShareLock lineitem relation TRUE (null) 21/3769\n>>> >\n>>> >\n>>>\n>>> Well, we see something is holding a SHARE lock on the \"lineitem\" table,\n>>> but we don't really know what the session is doing.\n>>>\n>>> There's a PID in the pg_locks table, you can use it to lookup the\n>>> session in pg_stat_activity which includes the query (and also \"state\"\n>>> column that will tell you if it's active or waiting for a lock.\n>>>\n>>> regards\n>>>\n>>> --\n>>> Tomas Vondra http://www.2ndQuadrant.com\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Oct 2017 20:25:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: blocking index creation"
}
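Alongside the wiki queries Scott links to, 9.6 and later also ship pg_blocking_pids(), which turns the question of who is blocking whom into a short query. A sketch:

-- For every waiting session, list the sessions blocking it.
SELECT waiting.pid             AS blocked_pid,
       blocker.pid             AS blocking_pid,
       left(blocker.query, 60) AS blocking_query
FROM pg_stat_activity waiting
JOIN LATERAL unnest(pg_blocking_pids(waiting.pid)) AS b(pid) ON true
JOIN pg_stat_activity blocker ON blocker.pid = b.pid;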
] |
[
{
"msg_contents": "Hi,\n\nI've faced a problem that a query without limit works much faster than with\none. Steps to reproduce\n\ncreate extension pg_trgm;\ncreate table t (id serial, val text, constraint t_pk primary key (id));\ninsert into t (val) select (random() * 100)::text from generate_series(1,\n1000000);\ncreate index t_val_idx on t using gin (val gin_trgm_ops);\n\nquota_patient> explain (analyze,buffers) select id from t where val like (\nselect '6'::text) group by id;\n+--------------------------------------------------------------------------------------------------------------------------------+\n\n| QUERY PLAN\n\n |\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n\n| HashAggregate (cost=6401.14..6451.14 rows=5000 width=4) (actual\ntime=22.861..22.861 rows=0 loops=1) |\n| Group Key: id\n\n |\n\n| Buffers: shared hit=5158\n\n |\n\n| InitPlan 1 (returns $0)\n\n |\n\n| -> Result (cost=0.00..0.01 rows=1 width=32) (actual\ntime=0.002..0.002 rows=1 loops=1) |\n| -> Bitmap Heap Scan on t (cost=50.75..6388.63 rows=5000 width=4)\n(actual time=22.835..22.835 rows=0 loops=1) |\n| Recheck Cond: (val ~~ $0)\n\n |\n\n| Rows Removed by Index Recheck: 10112\n\n |\n\n| Heap Blocks: exact=5097\n\n |\n\n| Buffers: shared hit=5158\n\n |\n\n| -> Bitmap Index Scan on t_val_idx (cost=0.00..49.50 rows=5000\nwidth=0) (actual time=8.762..8.762 rows=10112 loops=1) |\n| Index Cond: (val ~~ $0)\n\n |\n\n| Buffers: shared hit=61\n\n |\n\n| Planning time: 0.166 ms\n\n |\n\n| Execution time: 22.970 ms\n\n |\n\n+--------------------------------------------------------------------------------------------------------------------------------+\n\nEXPLAIN\nTime: 0.026s\n\nquota_patient> explain (analyze,buffers) select id from t where val like (\nselect '6'::text) group by id limit 1;\n+-------------------------------------------------------------------------------------------------------------------------------+\n\n| QUERY PLAN\n\n |\n\n|-------------------------------------------------------------------------------------------------------------------------------|\n\n| Limit (cost=0.43..7.41 rows=1 width=4) (actual time=439.561..439.561\nrows=0 loops=1) |\n| Buffers: shared hit=9105\n\n |\n\n| InitPlan 1 (returns $0)\n\n |\n\n| -> Result (cost=0.00..0.01 rows=1 width=32) (actual\ntime=0.002..0.002 rows=1 loops=1) |\n| -> Group (cost=0.42..34865.93 rows=5000 width=4) (actual\ntime=439.560..439.560 rows=0 loops=1) |\n| Group Key: id\n\n |\n\n| Buffers: shared hit=9105\n\n |\n\n| -> Index Scan using t_pk on t (cost=0.42..34853.43 rows=5000\nwidth=4) (actual time=439.557..439.557 rows=0 loops=1) |\n| Filter: (val ~~ $0)\n\n |\n\n| Rows Removed by Filter: 1000000\n\n |\n\n| Buffers: shared hit=9105\n\n |\n\n| Planning time: 0.205 ms\n\n |\n\n| Execution time: 439.610 ms\n\n |\n\n+-------------------------------------------------------------------------------------------------------------------------------+\n\nEXPLAIN\nTime: 0.443s\n\n\nI can't understand why adding limit after group by makes a planner fall to\nnon optimal plan. 
I tried to add more work_mem (up to 100Mb) but no effect.\nIs it a planner bug?\nBTW if I don't use subquery after like everything is ok\n\nquota_patient> explain (analyze,buffers) select id from t where val like '6'\n::text group by id limit 1;\n+-----------------------------------------------------------------------------------------------------------------------------------------+\n\n| QUERY PLAN\n\n |\n\n|-----------------------------------------------------------------------------------------------------------------------------------------|\n\n| Limit (cost=24.03..24.04 rows=1 width=4) (actual time=23.048..23.048\nrows=0 loops=1) |\n| Buffers: shared hit=5158\n\n |\n\n| -> Group (cost=24.03..24.04 rows=1 width=4) (actual\ntime=23.046..23.046 rows=0 loops=1)\n |\n| Group Key: id\n\n |\n\n| Buffers: shared hit=5158\n\n |\n\n| -> Sort (cost=24.03..24.04 rows=1 width=4) (actual\ntime=23.046..23.046 rows=0 loops=1)\n |\n| Sort Key: id\n\n |\n\n| Sort Method: quicksort Memory: 25kB\n\n |\n\n| Buffers: shared hit=5158\n\n |\n\n| -> Bitmap Heap Scan on t (cost=20.01..24.02 rows=1\nwidth=4) (actual time=23.036..23.036 rows=0 loops=1) |\n| Recheck Cond: (val ~~ '6'::text)\n\n |\n\n| Rows Removed by Index Recheck: 10112\n\n |\n\n| Heap Blocks: exact=5097\n\n |\n\n| Buffers: shared hit=5158\n\n |\n\n| -> Bitmap Index Scan on t_val_idx (cost=0.00..20.01\nrows=1 width=0) (actual time=8.740..8.740 rows=10112 loops=1) |\n| Index Cond: (val ~~ '6'::text)\n\n |\n\n| Buffers: shared hit=61\n\n |\n\n| Planning time: 0.190 ms\n\n |\n\n| Execution time: 23.105 ms\n\n |\n\n+-----------------------------------------------------------------------------------------------------------------------------------------+\n\nEXPLAIN\nTime: 0.026s\n\nHi,I've faced a problem that a query without limit works much faster than with one. 
Steps to reproducecreate extension pg_trgm;create table t (id serial, val text, constraint t_pk primary key (id));insert into t (val) select (random() * 100)::text from generate_series(1,1000000);create index t_val_idx on t using gin (val gin_trgm_ops);quota_patient> explain (analyze,buffers) select id from t where val like (select '6'::text) group by id;\r\n+--------------------------------------------------------------------------------------------------------------------------------+\r\n| QUERY PLAN |\r\n|--------------------------------------------------------------------------------------------------------------------------------|\r\n| HashAggregate (cost=6401.14..6451.14 rows=5000 width=4) (actual time=22.861..22.861 rows=0 loops=1) |\r\n| Group Key: id |\r\n| Buffers: shared hit=5158 |\r\n| InitPlan 1 (returns $0) |\r\n| -> Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=1) |\r\n| -> Bitmap Heap Scan on t (cost=50.75..6388.63 rows=5000 width=4) (actual time=22.835..22.835 rows=0 loops=1) |\r\n| Recheck Cond: (val ~~ $0) |\r\n| Rows Removed by Index Recheck: 10112 |\r\n| Heap Blocks: exact=5097 |\r\n| Buffers: shared hit=5158 |\r\n| -> Bitmap Index Scan on t_val_idx (cost=0.00..49.50 rows=5000 width=0) (actual time=8.762..8.762 rows=10112 loops=1) |\r\n| Index Cond: (val ~~ $0) |\r\n| Buffers: shared hit=61 |\r\n| Planning time: 0.166 ms |\r\n| Execution time: 22.970 ms |\r\n+--------------------------------------------------------------------------------------------------------------------------------+\r\nEXPLAIN\r\nTime: 0.026squota_patient> explain (analyze,buffers) select id from t where val like (select '6'::text) group by id limit 1;\r\n+-------------------------------------------------------------------------------------------------------------------------------+\r\n| QUERY PLAN |\r\n|-------------------------------------------------------------------------------------------------------------------------------|\r\n| Limit (cost=0.43..7.41 rows=1 width=4) (actual time=439.561..439.561 rows=0 loops=1) |\r\n| Buffers: shared hit=9105 |\r\n| InitPlan 1 (returns $0) |\r\n| -> Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=1) |\r\n| -> Group (cost=0.42..34865.93 rows=5000 width=4) (actual time=439.560..439.560 rows=0 loops=1) |\r\n| Group Key: id |\r\n| Buffers: shared hit=9105 |\r\n| -> Index Scan using t_pk on t (cost=0.42..34853.43 rows=5000 width=4) (actual time=439.557..439.557 rows=0 loops=1) |\r\n| Filter: (val ~~ $0) |\r\n| Rows Removed by Filter: 1000000 |\r\n| Buffers: shared hit=9105 |\r\n| Planning time: 0.205 ms |\r\n| Execution time: 439.610 ms |\r\n+-------------------------------------------------------------------------------------------------------------------------------+\r\nEXPLAIN\r\nTime: 0.443s\nI can't understand why adding limit after group by makes a planner fall to non optimal plan. I tried to add more work_mem (up to 100Mb) but no effect. 
Is it a planner bug?BTW if I don't use subquery after like everything is okquota_patient> explain (analyze,buffers) select id from t where val like '6'::text group by id limit 1;\r\n+-----------------------------------------------------------------------------------------------------------------------------------------+\r\n| QUERY PLAN |\r\n|-----------------------------------------------------------------------------------------------------------------------------------------|\r\n| Limit (cost=24.03..24.04 rows=1 width=4) (actual time=23.048..23.048 rows=0 loops=1) |\r\n| Buffers: shared hit=5158 |\r\n| -> Group (cost=24.03..24.04 rows=1 width=4) (actual time=23.046..23.046 rows=0 loops=1) |\r\n| Group Key: id |\r\n| Buffers: shared hit=5158 |\r\n| -> Sort (cost=24.03..24.04 rows=1 width=4) (actual time=23.046..23.046 rows=0 loops=1) |\r\n| Sort Key: id |\r\n| Sort Method: quicksort Memory: 25kB |\r\n| Buffers: shared hit=5158 |\r\n| -> Bitmap Heap Scan on t (cost=20.01..24.02 rows=1 width=4) (actual time=23.036..23.036 rows=0 loops=1) |\r\n| Recheck Cond: (val ~~ '6'::text) |\r\n| Rows Removed by Index Recheck: 10112 |\r\n| Heap Blocks: exact=5097 |\r\n| Buffers: shared hit=5158 |\r\n| -> Bitmap Index Scan on t_val_idx (cost=0.00..20.01 rows=1 width=0) (actual time=8.740..8.740 rows=10112 loops=1) |\r\n| Index Cond: (val ~~ '6'::text) |\r\n| Buffers: shared hit=61 |\r\n| Planning time: 0.190 ms |\r\n| Execution time: 23.105 ms |\r\n+-----------------------------------------------------------------------------------------------------------------------------------------+\r\nEXPLAIN\r\nTime: 0.026s",
"msg_date": "Thu, 12 Oct 2017 10:18:39 +1000",
"msg_from": "=?UTF-8?B?0JTQtdC90LjRgSDQodC80LjRgNC90L7Qsg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong plane for limit after group by"
}
] |
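A workaround sketch for the plan flip discussed in the thread above: keep the planner from pushing the LIMIT down to the grouped scan by materializing the grouped result first. The schema (t, t_val_idx) is the one from the reproduction steps; in 9.6 a CTE is an optimization fence, so it is planned without the outer LIMIT and the trigram bitmap scan is kept. This is an assumed workaround, not something verified in the thread itself.

    -- assumes the table and index from the reproduction steps:
    --   create table t (id serial, val text, constraint t_pk primary key (id));
    --   create index t_val_idx on t using gin (val gin_trgm_ops);
    with matches as (                      -- materialized in 9.6, so planned without the LIMIT
        select id
        from t
        where val like (select '6'::text)
        group by id
    )
    select id from matches limit 1;

Wrapping the grouped query in a subselect ending in OFFSET 0 usually has the same fencing effect.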
[
{
"msg_contents": "Hi\n I would like your advice and recommendation about the following infrastructure problem :\nWhat is the best way to optimize synchronization between an instance PostgreSQL on Windows 7 workstation and an Oracle 11gR2 database on linux RHEL ?\nHere are more detailed explanations\nIn our company we have people who collect data in a 9.6 postgresql instance on their workstation that is disconnected from the internet.\nIn the evening, they connect to the Internet and synchronize the collected data to a remote 11gr2 Oracle database.\nWhat is the best performant way to do this ( Oracle_FDW ?, flat files ?, ...)\n\nThanks in advance\n\nBest Regards\n[cid:[email protected]]\nDidier ROS\nDSP/CSP IT-DMA/Solutions Groupe EDF/Expertise Applicative\nExpertise SGBD\nMail : [email protected]\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 12 Oct 2017 09:13:03 +0000",
"msg_from": "ROS Didier <[email protected]>",
"msg_from_op": true,
"msg_subject": "synchronization between PostgreSQL and Oracle"
},
{
"msg_contents": "On Thu, Oct 12, 2017 at 5:13 AM, ROS Didier <[email protected]> wrote:\n\n> Hi\n>\n> I would like your advice and recommendation about the\n> following infrastructure problem :\n>\n> What is the best way to optimize synchronization between an instance\n> PostgreSQL on Windows 7 workstation and an Oracle 11gR2 database on linux\n> RHEL ?\n>\n> Here are more detailed explanations\n>\n> In our company we have people who collect data in a 9.6 postgresql\n> instance on their workstation that is disconnected from the internet.\n>\n> In the evening, they connect to the Internet and synchronize the collected\n> data to a remote 11gr2 Oracle database.\n>\n> What is the best performant way to do this ( Oracle_FDW ?, flat files ?, …)\n>\n>\n>\nThere are several ways to go about this, but for your use case I'd\nrecommend SymmetricDS -- http://symmetricds.org (or for the commercial\nversion: http://jumpmind.com)\n\nSymmetricDS was originally designed to collect data from cash registers in\na vastly distributed set of small databases and aggregate those results\nback into both regional and national data warehouses. It also pushed data\nthe other way - when pricing was updated at corporate headquarters, the\ndata was pushed back into the cash registers. It works with a wide variety\nof database technologies, scales well, and has many synchronization\noptions. It is also being used by some organizations these days to\nsynchronize small databases on IOS and Android devices with their parent\ndatabases back at HQ.\n\nI first used it to implement an Oracle to PostgreSQL data migration that\nhad to be done without down time. I've used it successfully for real time\ndata pushes from MySQL and PG OLTP systems into an Oracle DataMart. I\nalso used to use it for PostgreSQL bidirectional replication before other\ntools became easier to use. Because of its great flexibility, SymmetricDS\nhas a ton of knobs to turn and buttons and configuration options and may\ntake a bit to get it working optimally. If you are short on time to\nimplement a solution, I'd suggest going with the commercial version.\n\nOn Thu, Oct 12, 2017 at 5:13 AM, ROS Didier <[email protected]> wrote:\n\n\nHi\n I would like your advice and recommendation about the following infrastructure problem :\nWhat is the best way to optimize synchronization between an instance PostgreSQL on Windows 7 workstation and an Oracle 11gR2 database on linux RHEL ?\nHere are more detailed explanations\nIn our company we have people who collect data in a 9.6 postgresql instance on their workstation that is disconnected from the internet.\nIn the evening, they connect to the Internet and synchronize the collected data to a remote 11gr2 Oracle database.\nWhat is the best performant way to do this ( Oracle_FDW ?, flat files ?, …)\nThere are several ways to go about this, but for your use case I'd recommend SymmetricDS -- http://symmetricds.org (or for the commercial version: http://jumpmind.com)SymmetricDS was originally designed to collect data from cash registers in a vastly distributed set of small databases and aggregate those results back into both regional and national data warehouses. It also pushed data the other way - when pricing was updated at corporate headquarters, the data was pushed back into the cash registers. It works with a wide variety of database technologies, scales well, and has many synchronization options. 
It is also being used by some organizations these days to synchronize small databases on IOS and Android devices with their parent databases back at HQ.I first used it to implement an Oracle to PostgreSQL data migration that had to be done without down time. I've used it successfully for real time data pushes from MySQL and PG OLTP systems into an Oracle DataMart. I also used to use it for PostgreSQL bidirectional replication before other tools became easier to use. Because of its great flexibility, SymmetricDS has a ton of knobs to turn and buttons and configuration options and may take a bit to get it working optimally. If you are short on time to implement a solution, I'd suggest going with the commercial version.",
"msg_date": "Thu, 12 Oct 2017 06:01:40 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: synchronization between PostgreSQL and Oracle"
},
{
"msg_contents": "ROS Didier wrote:\n> I would like your advice and recommendation about the following infrastructure problem :\n> What is the best way to optimize synchronization between an instance PostgreSQL on Windows 7 workstation and an Oracle 11gR2 database on linux RHEL ?\n> Here are more detailed explanations\n> In our company we have people who collect data in a 9.6 postgresql instance on their workstation that is disconnected from the internet.\n> In the evening, they connect to the Internet and synchronize the collected data to a remote 11gr2 Oracle database.\n> What is the best performant way to do this ( Oracle_FDW ?, flat files ?, …)\n\nIf the synchronization is triggered from the workstation with\nPostgreSQL on it, you can either use oracle_fdw or pg_dump/sql*loader\nto transfer the data.\n\nUsing oracle_fdw is probably simpler, but it is not very performant\nfor bulk update operations.\n\nIf performance is the main objective, use export/import.\n\nYours,\nLaurenz Albe\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Oct 2017 12:04:32 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: synchronization between PostgreSQL and Oracle"
}
] |
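To make the oracle_fdw route from the replies above concrete, the nightly push can be a plain INSERT ... SELECT into a foreign table. The names below (server, schema, tables, credentials) are invented for illustration, and as noted in the thread, row-by-row FDW writes are slower than a flat-file export/import for large volumes.

    -- assumes the oracle_fdw extension is installed on the 9.6 workstation
    CREATE EXTENSION IF NOT EXISTS oracle_fdw;

    CREATE SERVER ora_srv FOREIGN DATA WRAPPER oracle_fdw
        OPTIONS (dbserver '//oracle-host:1521/ORCL');          -- hypothetical connect string

    CREATE USER MAPPING FOR CURRENT_USER SERVER ora_srv
        OPTIONS (user 'COLLECT', password 'secret');           -- hypothetical credentials

    -- local mirror of the remote Oracle table (invented columns)
    CREATE FOREIGN TABLE ora_collected_data (
        id        numeric,
        payload   text,
        collected timestamp
    ) SERVER ora_srv OPTIONS (schema 'COLLECT', table 'COLLECTED_DATA');

    -- evening synchronization: push the rows collected during the day
    INSERT INTO ora_collected_data
    SELECT id, payload, collected
    FROM   local_collected_data
    WHERE  NOT synced;                                         -- invented bookkeeping column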
[
{
"msg_contents": "Hello,\n\nWe are running workload on a EDB Postgres Advanced Server 9.6 and we see\nthat 99% of the time is spent on WAL wait events:\n\n\n\n\n\n\n\n*System Wait Information WAIT NAME COUNT WAIT TIME % WAIT\n---------------------------------------------------------------------------\nwal flush 564552 298.789464 41.67 wal write 521514 211.601124 29.51 wal\nfile sync 521546 205.519643 28.66*\nDisk IO performance is not an issue and WAL is on a dedicated disk.\n\nCan somebody pls suggest if there is any possibility to improve this & how?\n\nWe already tried wal_buffers=96m, wal_sync_method=open_sync/open_datasync,\ncheckpoint_completion_target=0.9 but none of those helped.\n\nSystem has 32GB RAM and shared_buffers=8GB. All transactions are happening\non a single table which has about 1.5m records and the table size is 1.7GB\nwith just one PK index.\n\nMany Thanks\n\nRegards\n\nHello,We are running workload on a EDB Postgres Advanced Server 9.6 and we see that 99% of the time is spent on WAL wait events:System Wait\nInformation\n\nWAIT NAME COUNT WAIT TIME % WAIT\n---------------------------------------------------------------------------\nwal flush 564552 298.789464 41.67\nwal write 521514 211.601124 29.51\nwal file sync 521546 205.519643 28.66\n\nDisk IO performance is not an issue and WAL is on a dedicated disk.Can somebody pls suggest if there is any possibility to improve this & how?We already tried wal_buffers=96m, wal_sync_method=open_sync/open_datasync, checkpoint_completion_target=0.9 but none of those helped.System has 32GB RAM and shared_buffers=8GB. All transactions are\nhappening on a single table which has about 1.5m records and the table size is\n1.7GB with just one PK index.Many ThanksRegards",
"msg_date": "Mon, 16 Oct 2017 19:04:36 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "99% time spent in WAL wait events"
},
{
"msg_contents": "Kindly ignore this post. It was an oversight - the wait times are in\nmillisec and hence even if we manage to reduce these waits to 0, we will\ngain only 1000 msec of savings during a workload of 40min.\n\nRegards\n\nOn 16 Oct 2017 7:04 pm, \"Purav Chovatia\" <[email protected]> wrote:\n\nHello,\n\nWe are running workload on a EDB Postgres Advanced Server 9.6 and we see\nthat 99% of the time is spent on WAL wait events:\n\n\n\n\n\n\n\n*System Wait Information WAIT NAME COUNT WAIT TIME % WAIT\n---------------------------------------------------------------------------\nwal flush 564552 298.789464 41.67 wal write 521514 211.601124 29.51 wal\nfile sync 521546 205.519643 28.66*\nDisk IO performance is not an issue and WAL is on a dedicated disk.\n\nCan somebody pls suggest if there is any possibility to improve this & how?\n\nWe already tried wal_buffers=96m, wal_sync_method=open_sync/open_datasync,\ncheckpoint_completion_target=0.9 but none of those helped.\n\nSystem has 32GB RAM and shared_buffers=8GB. All transactions are happening\non a single table which has about 1.5m records and the table size is 1.7GB\nwith just one PK index.\n\nMany Thanks\n\nRegards\n\nKindly ignore this post. It was an oversight - the wait times are in millisec and hence even if we manage to reduce these waits to 0, we will gain only 1000 msec of savings during a workload of 40min. RegardsOn 16 Oct 2017 7:04 pm, \"Purav Chovatia\" <[email protected]> wrote:Hello,We are running workload on a EDB Postgres Advanced Server 9.6 and we see that 99% of the time is spent on WAL wait events:System Wait\nInformation\n\nWAIT NAME COUNT WAIT TIME % WAIT\n---------------------------------------------------------------------------\nwal flush 564552 298.789464 41.67\nwal write 521514 211.601124 29.51\nwal file sync 521546 205.519643 28.66\n\nDisk IO performance is not an issue and WAL is on a dedicated disk.Can somebody pls suggest if there is any possibility to improve this & how?We already tried wal_buffers=96m, wal_sync_method=open_sync/open_datasync, checkpoint_completion_target=0.9 but none of those helped.System has 32GB RAM and shared_buffers=8GB. All transactions are\nhappening on a single table which has about 1.5m records and the table size is\n1.7GB with just one PK index.Many ThanksRegards",
"msg_date": "Mon, 16 Oct 2017 21:01:32 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 99% time spent in WAL wait events"
}
] |
[
{
"msg_contents": "we are using cloud server\n\n*this are memory info*\n\nfree -h\n total used free shared buffers cached\nMem: 15G 15G 197M 194M 121M 14G\n-/+ buffers/cache: 926M 14G\nSwap: 15G 32M 15G\n\n*this are disk info:*\n df -h\n\nFilesystem Size Used Avail Use% Mounted on\n/dev/vda1 20G 1.7G 17G 10% /\ndevtmpfs 7.9G 0 7.9G 0% /dev\ntmpfs 7.9G 4.0K 7.9G 1% /dev/shm\ntmpfs 7.9G 17M 7.9G 1% /run\ntmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup\n/dev/mapper/vgzero-lvhome 99G 189M 94G 1% /home\n/dev/mapper/vgzero-lvdata 1.2T 75G 1.1T 7% /data\n/dev/mapper/vgzero-lvbackup 296G 6.2G 274G 3% /backup\n/dev/mapper/vgzero-lvxlog 197G 61M 187G 1% /pg_xlog\n/dev/mapper/vgzero-lvarchive 197G 67G 121G 36% /archive\n\n\n\ni allocated memory as per following list:\nshared_buffers = 2GB (10-30 %)\neffective_cache_size =7GB (70-75 %) ---->>(shared_buffers+page cache) for\ndedicated server only\nwork_mem = 128MB (0.3-1 %)\nmaintenance_work_mem = 512MB (0.5-4 % )\ntemp_Buffer = 8MB ---->>default is better( setting can\nbe changed within individual sessions)\n\ncheckpoint_segments = 64\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 3.5\ncpu_tuple_cost = 0.05\nwal_buffers = 32MB leave this default 3% of shared buffer is better\n\n\n\nis it better or do i want to modify any thing\n\nour server is getting too slow again and again\n\nplease give me a suggestion\n\nwe are using cloud server this are memory infofree -h total used free shared buffers cachedMem: 15G 15G 197M 194M 121M 14G-/+ buffers/cache: 926M 14GSwap: 15G 32M 15Gthis are disk info: df -hFilesystem Size Used Avail Use% Mounted on/dev/vda1 20G 1.7G 17G 10% /devtmpfs 7.9G 0 7.9G 0% /devtmpfs 7.9G 4.0K 7.9G 1% /dev/shmtmpfs 7.9G 17M 7.9G 1% /runtmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup/dev/mapper/vgzero-lvhome 99G 189M 94G 1% /home/dev/mapper/vgzero-lvdata 1.2T 75G 1.1T 7% /data/dev/mapper/vgzero-lvbackup 296G 6.2G 274G 3% /backup/dev/mapper/vgzero-lvxlog 197G 61M 187G 1% /pg_xlog/dev/mapper/vgzero-lvarchive 197G 67G 121G 36% /archivei allocated memory as per following list:shared_buffers = 2GB (10-30 %)effective_cache_size =7GB (70-75 %) ---->>(shared_buffers+page cache) for dedicated server onlywork_mem = 128MB (0.3-1 %)maintenance_work_mem = 512MB (0.5-4 % )temp_Buffer = 8MB ---->>default is better( setting can be changed within individual sessions)checkpoint_segments = 64checkpoint_completion_target = 0.9random_page_cost = 3.5cpu_tuple_cost = 0.05wal_buffers = 32MB leave this default 3% of shared buffer is betteris it better or do i want to modify any thingour server is getting too slow again and againplease give me a suggestion",
"msg_date": "Tue, 17 Oct 2017 14:58:45 +0530",
"msg_from": "nijam J <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory allocation"
},
{
"msg_contents": "nijam J wrote:\n> our server is getting too slow again and again\n\nUse \"vmstat 1\" and \"iostat -mNx 1\" to see if you are\nrunning out of memory, CPU capacity or I/O bandwith.\n\nFigure out if the slowness is due to slow queries or\nan overloaded system.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Oct 2017 14:51:59 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory allocation"
}
] |
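Alongside the vmstat/iostat checks suggested above, a quick look from inside the database helps decide whether the settings listed earlier are the problem at all. A sketch using the standard statistics views (nothing here is specific to the poster's setup):

    -- rough buffer cache hit ratio per database; values near 99% suggest
    -- memory is not the bottleneck and individual slow queries should be examined instead
    SELECT datname,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
    FROM   pg_stat_database
    ORDER  BY blks_read DESC;

    -- tables doing the most sequential-scan work (possible missing indexes)
    SELECT relname, seq_scan, seq_tup_read, idx_scan
    FROM   pg_stat_user_tables
    ORDER  BY seq_tup_read DESC
    LIMIT  10;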
[
{
"msg_contents": "Hello.\n\nI have not used row level security policies in the past but am \nconsidering using them for a project in which I would like to restrict \nthe set returned in a query based on specific fields. This is more as a \nconvenience issue (for me) rather than a security issue.\n\nWhat I was wondering is what is the performance differences between a \nrow level security implementation:\n\nCREATE POLICY <policy name> ON <table> TO <role> USING \n(<field>=ANY(<values>));\n<series of selects>\nDROP POLICY <policy name>\n\nand an implementation where I add on the constraints as part of each \nselect statement:\n\nSELECT <whatever> FROM <table> WHERE <constraints> AND <field>=ANY(<values>)\n\nIn my (admittedly small) number of EXPLAINs I've looked at, it appears \nthat the policy logic is added to the SELECT statement as a constraint. \nSo I would not expect any fundamental performance difference in the 2 \ndifferent forms.\n\nIs this true? Or is there some extra behind-the-scenes things to be \naware of? Can there be excessive overhead from the CREATE/DROP POLICY \nstatements?\n\nThanks,\n\nJoe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Oct 2017 13:44:24 -0700",
"msg_from": "Joe Carlson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row level security policy policy versus SQL constraints. Any\n performance difference?"
},
{
"msg_contents": "Hi,\n\nOn 10/17/2017 10:44 PM, Joe Carlson wrote:\n> Hello.\n> \n> I have not used row level security policies in the past but am\n> considering using them for a project in which I would like to restrict\n> the set returned in a query based on specific fields. This is more as a\n> convenience issue (for me) rather than a security issue.\n> \n> What I was wondering is what is the performance differences between a\n> row level security implementation:\n> \n> CREATE POLICY <policy name> ON <table> TO <role> USING\n> (<field>=ANY(<values>));\n> <series of selects>\n> DROP POLICY <policy name>\n> \n> and an implementation where I add on the constraints as part of each\n> select statement:\n> \n> SELECT <whatever> FROM <table> WHERE <constraints> AND\n> <field>=ANY(<values>)\n> \n> In my (admittedly small) number of EXPLAINs I've looked at, it appears\n> that the policy logic is added to the SELECT statement as a constraint.\n> So I would not expect any fundamental performance difference in the 2\n> different forms.\n> \n> Is this true? Or is there some extra behind-the-scenes things to be\n> aware of? Can there be excessive overhead from the CREATE/DROP POLICY\n> statements?\n> \n\nThe main point of the RLS is enforcing an order in which the conditions\nare evaluated. That is, the \"security\" quals (coming from RLS policies)\nhave to be evaluated first, before any quals that might leak information\nabout the values (imagine a custom PL/pgSQL function inserting the data\nsomewhere, or perhaps just printing debug messages).\n\n(Many built-in operators are however exempt from that, as we consider\nthem leak-proof. This allows us to use non-RLS conditions for index\nscans etc. which might be impossible otherwise)\n\nOtherwise yes - it's pretty much the same as if you combine the\nconditions using AND. It's \"just\" much more convenient approach.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Oct 2017 23:35:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row level security policy policy versus SQL\n constraints. Any performance difference?"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 10/17/2017 10:44 PM, Joe Carlson wrote:\n>> What I was wondering is what is the performance differences between a\n>> row level security implementation:\n>> ...\n>> and an implementation where I add on the constraints as part of each\n>> select statement:\n\n> The main point of the RLS is enforcing an order in which the conditions\n> are evaluated.\n\nYeah. Because of that, I would *not* recommend RLS if you can equally\nwell stick the equivalent conditions into your queries. There is way\ntoo much risk of taking a serious performance hit due to a bad plan.\n\nAn alternative you might consider, if simplifying the input queries\nis useful, is to put the fixed conditions into a view and query the\nview instead. That way there's not an enforced evaluation order.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Oct 2017 18:06:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Row level security policy policy versus SQL constraints. Any\n performance difference?"
},
{
"msg_contents": "Thanks for your suggestions.\n\nI had pretty much given up on this idea. At first, I had thought there \nwould only be 2 or 3 different constraint cases to consider. I had \nthought of using distinct credentials for my connection and using RLS to \ngive different cuts on the same table. The different policies could be \nestablished in advance and never touched.\n\nBut then it became clear that I actually would need a very large number \nof different restrictions on the tables - too many to create in advance. \nAt this point it's easiest to apply constraints on each select rather \nthan apply a policy every time.\n\nThanks,\n\nJoe\n\nOn 10/17/2017 03:06 PM, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 10/17/2017 10:44 PM, Joe Carlson wrote:\n>>> What I was wondering is what is the performance differences between a\n>>> row level security implementation:\n>>> ...\n>>> and an implementation where I add on the constraints as part of each\n>>> select statement:\n>> The main point of the RLS is enforcing an order in which the conditions\n>> are evaluated.\n> Yeah. Because of that, I would *not* recommend RLS if you can equally\n> well stick the equivalent conditions into your queries. There is way\n> too much risk of taking a serious performance hit due to a bad plan.\n>\n> An alternative you might consider, if simplifying the input queries\n> is useful, is to put the fixed conditions into a view and query the\n> view instead. That way there's not an enforced evaluation order.\n>\n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 17 Oct 2017 15:18:49 -0700",
"msg_from": "Joe Carlson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Row level security policy policy versus SQL\n constraints. Any performance difference?"
}
] |
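To make the view alternative mentioned above concrete: the fixed predicate goes into an ordinary view, so the planner is free to combine and reorder it with whatever conditions the caller adds, without the leakproof-ordering constraint that RLS imposes. Table and column names below are placeholders, not anything from the thread.

    -- placeholder schema
    CREATE TABLE observations (id bigint PRIMARY KEY, site_id int, payload text);

    -- instead of: CREATE POLICY p ON observations TO some_role
    --                 USING (site_id = ANY ('{1,2,3}'::int[]));
    CREATE VIEW observations_sites_123 AS
        SELECT id, site_id, payload
        FROM   observations
        WHERE  site_id = ANY (ARRAY[1, 2, 3]);

    -- callers query the view and simply add their own conditions
    SELECT id FROM observations_sites_123 WHERE payload LIKE 'abc%';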
[
{
"msg_contents": "Hi there,\n\nThat's my first question in this mailing list! :)\n\nIs it possible (node.js connecting to PG 9.6 on RDS) to set a lower\npriority to a connection so that that particular process (BATCH INSERT)\nwould have a low impact on other running processes on PG, like live queries\nand single inserts/updates?\n\nI would like the batch insert to complete as soon as possible, but at the\nsame time keep individual queries and inserts running on maximum speed.\n\n*SINGLE SELECTS (HIGH PRIORITY)*\n*SINGLE INSERTS/UPDATES (HIGH PRIORITY)*\nBATCH INSERT (LOW PRIORITY)\nBATCH SELECT (LOW PRIORITY)\n\n\n\nIs that a good idea? Is this feasible with Node.js + PG?\n\nThanks\n\nHi there,That's my first question in this mailing list! :)Is it possible (node.js connecting to PG 9.6 on RDS) to set a lower priority to a connection so that that particular process (BATCH INSERT) would have a low impact on other running processes on PG, like live queries and single inserts/updates?I would like the batch insert to complete as soon as possible, but at the same time keep individual queries and inserts running on maximum speed.SINGLE SELECTS (HIGH PRIORITY)SINGLE INSERTS/UPDATES (HIGH PRIORITY)BATCH INSERT (LOW PRIORITY)BATCH SELECT (LOW PRIORITY)Is that a good idea? Is this feasible with Node.js + PG?Thanks",
"msg_date": "Thu, 19 Oct 2017 14:10:12 -0200",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Low priority batch insert"
},
{
"msg_contents": "On Fri, Oct 20, 2017 at 1:10 AM, Jean Baro <[email protected]> wrote:\n> That's my first question in this mailing list! :)\n\nWelcome!\n\n> Is it possible (node.js connecting to PG 9.6 on RDS) to set a lower priority\n> to a connection so that that particular process (BATCH INSERT) would have a\n> low impact on other running processes on PG, like live queries and single\n> inserts/updates?\n>\n> Is that a good idea? Is this feasible with Node.js + PG?\n\nThe server could be changed so as backend processes use setpriority\nand getpriority using a GUC parameter, and you could leverage priority\nof processes using that. The good news is that this can be done as a\nmodule, see an example from Fujii Masao's pg_cheat_funcs that caught\nmy attention actually yesterday:\nhttps://github.com/MasaoFujii/pg_cheat_funcs/commit/a39ec1549e2af72bf101da5075c4e12d079f7c5b\nThe bad news is that you are on RDS, so vendor locking is preventing\nyou from loading any custom modules.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 20 Oct 2017 07:54:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low priority batch insert"
}
] |
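Since loading server-side modules is not possible on RDS, a common fallback (not discussed in the thread) is to make the batch gentler rather than lower-priority: move the rows in small chunks, each in its own short transaction, pausing between chunks so single-row traffic is not starved. A sketch of that idea with invented staging and target tables; the same loop is usually driven from the node.js client, one chunk per query.

    -- one chunk: move up to 1000 rows from the staging table into the hot table
    WITH chunk AS (
        DELETE FROM staging_rows
        WHERE  id IN (SELECT id FROM staging_rows LIMIT 1000)
        RETURNING id, payload
    )
    INSERT INTO target_rows (id, payload)
    SELECT id, payload FROM chunk;

    -- optional pause before the next chunk when pacing from SQL
    SELECT pg_sleep(0.1);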
[
{
"msg_contents": "Hello Experts,\n\nWe are trying to tune our postgresql DB using perf. We are running a C\nprogram that connects to postgres DB and calls very simple StoredProcs, one\neach for SELECT, INSERT & UPDATE.\n\nThe SPs are very simple.\n*SELECT_SP*:\nCREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT numeric,p3\nOUT numeric,.......,p205 OUT numeric) AS\nBEGIN\n SELECT c2,c3,......,c205\n INTO p2,p3,.......,p205\n FROM dept_new\n WHERE c1 = p1;\nEND;\n\n*UPDATE_SP*:\nCREATE OR REPLACE PROCEDURE query_dept_update(p1 IN numeric, p2 IN\nnumeric,........,p205 IN numeric) AS\nBEGIN\n update dept_new set c2 = p2,c3 = p3,.....,c205 = p205\n WHERE c1 = p1;\ncommit;\nEND;\n\n*INSERT_SP*:\nCREATE OR REPLACE PROCEDURE query_dept_insert(p1 IN numeric, p2 IN\nnumeric,.....,p205 IN numeric) AS\nBEGIN\ninsert into dept_new values(p1,p2,.....,p205);\ncommit;\nEND;\n\nAs shown above, its all on a single table. Before every test, the table is\ntruncated and loaded with 1m rows. WAL is on a separate disk.\n\nIts about 3x slower as compared to Oracle and major events are WAL related.\nWith fsync=off or sync_commit=off it gets 10% better but still far from\nOracle. Vacuuming the table does not help. Checkpoint too is not an issue.\n\nSince we dont see any other way to find out what is slowing it down, we\ngathered data using the perf tool. Can somebody pls help on how do we go\nabout reading the perf report.\n\nThanks & Regards\n\nHello Experts,We are trying to tune our postgresql DB using perf. We are running a C program that connects to postgres DB and calls very simple StoredProcs, one each for SELECT, INSERT & UPDATE. The SPs are very simple. SELECT_SP:CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT numeric,p3 OUT numeric,.......,p205 OUT numeric) ASBEGIN SELECT c2,c3,......,c205 INTO p2,p3,.......,p205 FROM dept_new WHERE c1 = p1;END;UPDATE_SP:CREATE OR REPLACE PROCEDURE query_dept_update(p1 IN numeric, p2 IN numeric,........,p205 IN numeric) ASBEGIN update dept_new set c2 = p2,c3 = p3,.....,c205 = p205 WHERE c1 = p1; commit;END;INSERT_SP:CREATE OR REPLACE PROCEDURE query_dept_insert(p1 IN numeric, p2 IN numeric,.....,p205 IN numeric) ASBEGIN insert into dept_new values(p1,p2,.....,p205); commit;END;As shown above, its all on a single table. Before every test, the table is truncated and loaded with 1m rows. WAL is on a separate disk.Its about 3x slower as compared to Oracle and major events are WAL related. With fsync=off or sync_commit=off it gets 10% better but still far from Oracle. Vacuuming the table does not help. Checkpoint too is not an issue. Since we dont see any other way to find out what is slowing it down, we gathered data using the perf tool. Can somebody pls help on how do we go about reading the perf report. Thanks & Regards",
"msg_date": "Tue, 24 Oct 2017 00:49:20 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql tuning with perf"
},
{
"msg_contents": "Hi,\ncould you providence the code used with PG ?\nHas table dept_new an index/pk on c1 ?\nDo you analyze this table after loading it ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Oct 2017 13:29:17 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "\n\nOn 10/23/2017 09:19 PM, Purav Chovatia wrote:\n> Hello Experts,\n> \n> We are trying to tune our postgresql DB using perf.\n\nCan you share some of the perf reports, then?\n\n> We are running a C program that connects to postgres DB and calls\n> very simple StoredProcs, one each for SELECT, INSERT & UPDATE.\n> \n> The SPs are very simple. \n> *SELECT_SP*:\n> CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT\n> numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> BEGIN\n> SELECT c2,c3,......,c205\n> INTO p2,p3,.......,p205\n> FROM dept_new\n> WHERE c1 = p1;\n> END;\n> \n> *UPDATE_SP*:\n> CREATE OR REPLACE PROCEDURE query_dept_update(p1 IN numeric, p2 IN\n> numeric,........,p205 IN numeric) AS\n> BEGIN\n> update dept_new set c2 = p2,c3 = p3,.....,c205 = p205 \n> WHERE c1 = p1;\n> commit;\n> END;\n> \n> *INSERT_SP*:\n> CREATE OR REPLACE PROCEDURE query_dept_insert(p1 IN numeric, p2 IN\n> numeric,.....,p205 IN numeric) AS\n> BEGIN\n> insert into dept_new values(p1,p2,.....,p205);\n> commit;\n> END;\n> \n> As shown above, its all on a single table. Before every test, the table\n> is truncated and loaded with 1m rows. WAL is on a separate disk.\n> \n\nIt'd be nice if you could share more details about the structure of the\ntable, hardware and observed metrics (throughput, ...). Otherwise we\ncan't try reproducing it, for example.\n\n> Its about 3x slower as compared to Oracle and major events are WAL\n> related. With fsync=off or sync_commit=off it gets 10% better but still\n> far from Oracle. Vacuuming the table does not help. Checkpoint too is\n> not an issue. \n\nSo how do you know the major events are WAL related? Can you share how\nyou measure that and the measurements?\n\n> \n> Since we dont see any other way to find out what is slowing it down, we\n> gathered data using the perf tool. Can somebody pls help on how do we go\n> about reading the perf report.\n\nWell, that's hard to do when you haven't shared the report.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Oct 2017 22:55:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "\n> On Oct 23, 2017, at 12:19 PM, Purav Chovatia <[email protected]> wrote:\n> \n> Hello Experts,\n> \n> We are trying to tune our postgresql DB using perf. We are running a C program that connects to postgres DB and calls very simple StoredProcs, one each for SELECT, INSERT & UPDATE. \n> \n> The SPs are very simple. \n> SELECT_SP:\n> CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> BEGIN\n> SELECT c2,c3,......,c205\n> INTO p2,p3,.......,p205\n> FROM dept_new\n> WHERE c1 = p1;\n> END;\n\nPerhaps I'm confused, but I didn't think PostgreSQL had stored procedures. If the code you're actually running looks like this then I don't think you're using PostgreSQL.\n\nCheers,\n Steve\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Oct 2017 14:59:47 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "The language used for stored procedures is EDBSPL. Even if we dont use\nEDBSPL, and instead use PLPgPSQL, the performance is still the same.\n\nThanks\n\nOn 24 October 2017 at 03:29, Steve Atkins <[email protected]> wrote:\n\n>\n> > On Oct 23, 2017, at 12:19 PM, Purav Chovatia <[email protected]> wrote:\n> >\n> > Hello Experts,\n> >\n> > We are trying to tune our postgresql DB using perf. We are running a C\n> program that connects to postgres DB and calls very simple StoredProcs, one\n> each for SELECT, INSERT & UPDATE.\n> >\n> > The SPs are very simple.\n> > SELECT_SP:\n> > CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT\n> numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> > BEGIN\n> > SELECT c2,c3,......,c205\n> > INTO p2,p3,.......,p205\n> > FROM dept_new\n> > WHERE c1 = p1;\n> > END;\n>\n> Perhaps I'm confused, but I didn't think PostgreSQL had stored procedures.\n> If the code you're actually running looks like this then I don't think\n> you're using PostgreSQL.\n>\n> Cheers,\n> Steve\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe language used for stored procedures is EDBSPL. Even if we dont use EDBSPL, and instead use PLPgPSQL, the performance is still the same.ThanksOn 24 October 2017 at 03:29, Steve Atkins <[email protected]> wrote:\n> On Oct 23, 2017, at 12:19 PM, Purav Chovatia <[email protected]> wrote:\n>\n> Hello Experts,\n>\n> We are trying to tune our postgresql DB using perf. We are running a C program that connects to postgres DB and calls very simple StoredProcs, one each for SELECT, INSERT & UPDATE.\n>\n> The SPs are very simple.\n> SELECT_SP:\n> CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> BEGIN\n> SELECT c2,c3,......,c205\n> INTO p2,p3,.......,p205\n> FROM dept_new\n> WHERE c1 = p1;\n> END;\n\nPerhaps I'm confused, but I didn't think PostgreSQL had stored procedures. If the code you're actually running looks like this then I don't think you're using PostgreSQL.\n\nCheers,\n Steve\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Oct 2017 13:06:24 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "Thanks Tomas.\n\nAnd thanks again that you plan to reproduce it.\n\nWould appreciate if somebody can help understand as to how does one go\nabout troubleshooting performance in the postgresql world. In Oracle, I\nwould look at statspack and the wait events and most likely we would get\nthe root cause.\n\nTable has PK on col c1 and the predicate of the SELECT & UPDATE includes c1.\n\nServer is HP DL 380 dual cpu, each cpu with 6 cores with 36GB RAM. Table\nsize including index is 1.7GB. Shared_buffers=8GB, so the table is fully\ncached. Effective_cache_size=26GB. CPU util is 5-6% while running the\nworkload. EDB is processing ~1800 requests/sec whereas Oracle is processing\n~3300 req/sec.\n\nbmdb=# desc dept_new\n Table \"public.dept_new\"\n Column | Type | Modifiers\n--------+---------------+-----------\n c1 | numeric(10,0) | not null\n c2 | numeric(10,0) |\n.\n.\n.\n.\n.\n c205 | numeric(10,0) |\nIndexes:\n \"dept_new_pkey\" PRIMARY KEY, btree (c1)\n\nbmdb=#\n\nWe queried pg_stat_activity thrice every sec like this:\nbmdb# \\o wait_events.lst\nbmdb# SELECT wait_event_type, wait_event FROM pg_stat_activity WHERE pid !=\npg_backend_pid() and wait_event is not null;\nbmdb# \\watch 0.3\n\nWe see WALWriteLock events (and that too very few). However, with either\nfsync=off or sync_commit=off the time gain is only about 10-15%. So\neliminating those waits does not give the expected benefit. Since we dont\nsee any other waits, we believe its actually burning the cpu but we cant\nfigure out why.\n\nAttached herewith is the output of perf report -g -i perf.data redirected\nto perf_rep.lst. I am not too sure if this is how perf reports are shared,\nso pls let me know if the correct method. Also, given below is a snapshot\nof perf report.\n[image: Inline images 1]\n\nThanks & Regards\n\nOn 24 October 2017 at 02:25, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 10/23/2017 09:19 PM, Purav Chovatia wrote:\n> > Hello Experts,\n> >\n> > We are trying to tune our postgresql DB using perf.\n>\n> Can you share some of the perf reports, then?\n>\n> > We are running a C program that connects to postgres DB and calls\n> > very simple StoredProcs, one each for SELECT, INSERT & UPDATE.\n> >\n> > The SPs are very simple.\n> > *SELECT_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT\n> > numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> > BEGIN\n> > SELECT c2,c3,......,c205\n> > INTO p2,p3,.......,p205\n> > FROM dept_new\n> > WHERE c1 = p1;\n> > END;\n> >\n> > *UPDATE_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_update(p1 IN numeric, p2 IN\n> > numeric,........,p205 IN numeric) AS\n> > BEGIN\n> > update dept_new set c2 = p2,c3 = p3,.....,c205 = p205\n> > WHERE c1 = p1;\n> > commit;\n> > END;\n> >\n> > *INSERT_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_insert(p1 IN numeric, p2 IN\n> > numeric,.....,p205 IN numeric) AS\n> > BEGIN\n> > insert into dept_new values(p1,p2,.....,p205);\n> > commit;\n> > END;\n> >\n> > As shown above, its all on a single table. Before every test, the table\n> > is truncated and loaded with 1m rows. WAL is on a separate disk.\n> >\n>\n> It'd be nice if you could share more details about the structure of the\n> table, hardware and observed metrics (throughput, ...). Otherwise we\n> can't try reproducing it, for example.\n>\n> > Its about 3x slower as compared to Oracle and major events are WAL\n> > related. With fsync=off or sync_commit=off it gets 10% better but still\n> > far from Oracle. Vacuuming the table does not help. 
Checkpoint too is\n> > not an issue.\n>\n> So how do you know the major events are WAL related? Can you share how\n> you measure that and the measurements?\n>\n> >\n> > Since we dont see any other way to find out what is slowing it down, we\n> > gathered data using the perf tool. Can somebody pls help on how do we go\n> > about reading the perf report.\n>\n> Well, that's hard to do when you haven't shared the report.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Oct 2017 17:03:17 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "Hi Pascal,\n\nDo you mean the sample program that acts as the application, do you want me\nto share that? I can do that, but I guess my post will get blocked.\n\nYes, c1 is the PK. Pls see below:\nbmdb=# desc dept_new\n Table \"public.dept_new\"\n Column | Type | Modifiers\n--------+---------------+-----------\n c1 | numeric(10,0) | not null\n c2 | numeric(10,0) |\n.\n.\n.\n.\n.\n c205 | numeric(10,0) |\nIndexes:\n \"dept_new_pkey\" PRIMARY KEY, btree (c1)\n\nbmdb=#\n\nWe dont analyze after loading the table. But I guess thats required only if\nthe query plan is in doubt, lets say its doing a full table scan or alike,\nisnt it? That is not the case. The query is using PK index but it just\nseems to be slow.\n\nThanks\n\nOn 24 October 2017 at 01:59, legrand legrand <[email protected]>\nwrote:\n\n> Hi,\n> could you providence the code used with PG ?\n> Has table dept_new an index/pk on c1 ?\n> Do you analyze this table after loading it ?\n>\n> Regards\n> PAscal\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Pascal,Do you mean the sample program that acts as the application, do you want me to share that? I can do that, but I guess my post will get blocked.Yes, c1 is the PK. Pls see below:bmdb=# desc dept_new Table \"public.dept_new\" Column | Type | Modifiers--------+---------------+----------- c1 | numeric(10,0) | not null c2 | numeric(10,0) |..... c205 | numeric(10,0) |Indexes: \"dept_new_pkey\" PRIMARY KEY, btree (c1)bmdb=#We dont analyze after loading the table. But I guess thats required only if the query plan is in doubt, lets say its doing a full table scan or alike, isnt it? That is not the case. The query is using PK index but it just seems to be slow.ThanksOn 24 October 2017 at 01:59, legrand legrand <[email protected]> wrote:Hi,\ncould you providence the code used with PG ?\nHas table dept_new an index/pk on c1 ?\nDo you analyze this table after loading it ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Oct 2017 17:09:01 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "Please share how you monitor your perfs.\n\nAt less duration for each plpgsql proc / oracle proc.\nPlease share your plpgsql code, and commit strategy.\n\n(for support with edb please check with your contract manager)\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 07:51:11 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "We record like this: perf record -g -u enterprisedb\nWe report like this: perf report -g -i perf.data\n\nIs this what you were looking for? Sorry, we are new to perf so we might be\nsharing something different as compared to what you asked.\n\nWe already shared the SP code in the original post.\n\nThanks\n\nOn 24 October 2017 at 20:21, legrand legrand <[email protected]>\nwrote:\n\n> Please share how you monitor your perfs.\n>\n> At less duration for each plpgsql proc / oracle proc.\n> Please share your plpgsql code, and commit strategy.\n>\n> (for support with edb please check with your contract manager)\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWe record like this: perf record -g -u enterprisedbWe report like this: perf report -g -i perf.dataIs this what you were looking for? Sorry, we are new to perf so we might be sharing something different as compared to what you asked.We already shared the SP code in the original post.ThanksOn 24 October 2017 at 20:21, legrand legrand <[email protected]> wrote:Please share how you monitor your perfs.\n\nAt less duration for each plpgsql proc / oracle proc.\nPlease share your plpgsql code, and commit strategy.\n\n(for support with edb please check with your contract manager)\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Oct 2017 20:36:33 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "Thanks Tomas.\n\nAnd thanks again that you plan to reproduce it.\n\nWould appreciate if somebody can help understand as to how does one go\nabout troubleshooting performance in the postgresql world. In Oracle, I\nwould look at statspack and the wait events and most likely we would get\nthe root cause.\n\nTable has PK on col c1 and the predicate of the SELECT & UPDATE includes c1.\n\nServer is HP DL 380 dual cpu, each cpu with 6 cores with 36GB RAM. Table\nsize including index is 1.7GB. Shared_buffers=8GB, so the table is fully\ncached. Effective_cache_size=26GB. CPU util is 5-6% while running the\nworkload. EDB is processing ~1800 requests/sec whereas Oracle is processing\n~3300 req/sec.\n\nbmdb=# desc dept_new\n Table \"public.dept_new\"\n Column | Type | Modifiers\n--------+---------------+-----------\n c1 | numeric(10,0) | not null\n c2 | numeric(10,0) |\n.\n.\n.\n.\n.\n c205 | numeric(10,0) |\nIndexes:\n \"dept_new_pkey\" PRIMARY KEY, btree (c1)\n\nbmdb=#\n\nWe queried pg_stat_activity thrice every sec like this:\nbmdb# \\o wait_events.lst\nbmdb# SELECT wait_event_type, wait_event FROM pg_stat_activity WHERE pid !=\npg_backend_pid() and wait_event is not null;\nbmdb# \\watch 0.3\n\nWe see WALWriteLock events (and that too very few). However, with either\nfsync=off or sync_commit=off the time gain is only about 10-15%. So\neliminating those waits does not give the expected benefit. Since we dont\nsee any other waits, we believe its actually burning the cpu but we cant\nfigure out why.\n\nAttached herewith is the output of perf report -g -i perf.data redirected\nto perf_rep.lst. I am not too sure if this is how perf reports are shared,\nso pls let me know if the correct method. Also, given below is a snapshot\nof perf report.\n[image: Inline images 1]\n\nThanks & Regards\n\nOn 24 October 2017 at 02:25, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 10/23/2017 09:19 PM, Purav Chovatia wrote:\n> > Hello Experts,\n> >\n> > We are trying to tune our postgresql DB using perf.\n>\n> Can you share some of the perf reports, then?\n>\n> > We are running a C program that connects to postgres DB and calls\n> > very simple StoredProcs, one each for SELECT, INSERT & UPDATE.\n> >\n> > The SPs are very simple.\n> > *SELECT_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_new(p1 IN numeric, p2 OUT\n> > numeric,p3 OUT numeric,.......,p205 OUT numeric) AS\n> > BEGIN\n> > SELECT c2,c3,......,c205\n> > INTO p2,p3,.......,p205\n> > FROM dept_new\n> > WHERE c1 = p1;\n> > END;\n> >\n> > *UPDATE_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_update(p1 IN numeric, p2 IN\n> > numeric,........,p205 IN numeric) AS\n> > BEGIN\n> > update dept_new set c2 = p2,c3 = p3,.....,c205 = p205\n> > WHERE c1 = p1;\n> > commit;\n> > END;\n> >\n> > *INSERT_SP*:\n> > CREATE OR REPLACE PROCEDURE query_dept_insert(p1 IN numeric, p2 IN\n> > numeric,.....,p205 IN numeric) AS\n> > BEGIN\n> > insert into dept_new values(p1,p2,.....,p205);\n> > commit;\n> > END;\n> >\n> > As shown above, its all on a single table. Before every test, the table\n> > is truncated and loaded with 1m rows. WAL is on a separate disk.\n> >\n>\n> It'd be nice if you could share more details about the structure of the\n> table, hardware and observed metrics (throughput, ...). Otherwise we\n> can't try reproducing it, for example.\n>\n> > Its about 3x slower as compared to Oracle and major events are WAL\n> > related. With fsync=off or sync_commit=off it gets 10% better but still\n> > far from Oracle. Vacuuming the table does not help. 
Checkpoint too is\n> > not an issue.\n>\n> So how do you know the major events are WAL related? Can you share how\n> you measure that and the measurements?\n>\n> >\n> > Since we dont see any other way to find out what is slowing it down, we\n> > gathered data using the perf tool. Can somebody pls help on how do we go\n> > about reading the perf report.\n>\n> Well, that's hard to do when you haven't shared the report.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Tue, 24 Oct 2017 20:38:01 +0530",
"msg_from": "Purav Chovatia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql tuning with perf"
},
{
"msg_contents": "Once again you are speaking about edb port of postgresql. The edb pl sql code\nis not public. This is not the good place to get support: please ask your\nedb contract manager.\nIf you want support hère: please rewrite your oracle proc in pl pqsql, share\nthat code and commit strategy ... Postgres doesn't support commit in pl ...\nThis is a big difference\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 08:21:02 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql tuning with perf"
}
] |
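For the PL/pgSQL rewrite requested above, a native equivalent of the SELECT procedure looks roughly like the sketch below (shortened to three of the 205 columns, and written as a function because stock PostgreSQL 9.6 has no stored procedures and no COMMIT inside PL/pgSQL). While profiling, it is also worth checking whether the numeric(10,0) columns could be plain integer/bigint, since numeric arithmetic and comparisons cost noticeably more.

    CREATE OR REPLACE FUNCTION query_dept_new(p1 numeric,
                                              OUT p2 numeric,
                                              OUT p3 numeric,
                                              OUT p4 numeric)
    RETURNS record
    LANGUAGE plpgsql STABLE
    AS $$
    BEGIN
        SELECT c2, c3, c4
          INTO p2, p3, p4
          FROM dept_new
         WHERE c1 = p1;
    END;
    $$;

    -- called as:  SELECT * FROM query_dept_new(42);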
[
{
"msg_contents": "Hi,\n\nI see in the v10 release notes (2017-10-05) that there's been a change to \"Improve performance of queries affected by row-level security restrictions\". I am using RLS in a Postgres 9.5 database and am seeing some very bad performance when joining tables. Upgrading this DB to v10 shows a huge performance increase in some cases where RLS has proven to be an issue, but not all.\n\nI see here (https://www.postgresql.org/message-id/14730.1508278004%40sss.pgh.pa.us), that Tom Lane (author of the commit for the aforementioned release note) remarked on 2017-10-17: \"I would *not* recommend RLS if you can equally well stick the equivalent conditions into your queries. There is way too much risk of taking a serious performance hit due to a bad plan.\"\n\nWhat's the current advice, and future plans for row-level security performance optimisations?\nThough things have improved in v10, is there likely to always be that risk of a bad plan arising?\n\nRegards,\nJason Borg.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Oct 2017 02:51:26 +0000",
"msg_from": "Jason Borg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Row-level security performance"
}
] |
[
{
"msg_contents": "Hello,\n\nWe're encountering some problems with WAL growth in production with\nPostgreSQL 9.6.3 and 9.6.2. From what I know a WAL file can either be\nrecycled(and would be reused) or deleted.\nWe'd like to have better control over the amount of WAL that is kept around.\nThere were a few occasions where we had to resize partitions because\npg_xlog grew as much as it did.\n\nAccording to the docs [1] there are some parameters in GUC (postgresql.conf) about this.\nThe parameters I've been able to identify are the following:\n\n* wal_keep_segments\n* max_wal_size\n* min_wal_size \n\nOur WAL grows a lot around the time of our product upgrades (that is,\nwhen we upgrade to a new version of our database, so not a Pg upgrade,\njust a newer version of our db schema, plpgsql code etc).\nAs part of this upgrade, we add new columns or have some large UPDATEs\non tables as big as 300M (but in one case we also have one with 1.5B rows).\n\nI am seeing the following int he docs [3]\n\n min_wal_size (integer)\n As long as WAL disk usage stays below this setting, old WAL files are \n always recycled for future use at a checkpoint, rather than removed.\n This can be used to ensure that enough WAL space is reserved to handle\n spikes in WAL usage, for example when running large batch jobs. The default\n is 80 MB. This parameter can only be set in the postgresql.conf file or\n on the server command line.\n\nThis sounds very familiar because, that's essentially what we're doing. There\nare some large jobs that cause a lot of workload and changes and generate a lot of WAL.\n\nSo far, the way I interpret this is min_wal_size is the amount of WAL\nrecycled (that is kept around to be reused) and max_wal_size is the\ntotal amount of WAL allowed to be kept on disk.\n\nI would also like to interpret the default values of min_wal_size and max_wal_size.\nSo if I run the following query:\n\n psql -c \"select name, setting from pg_settings where name like '%wal_size';\"\n\nI get the following:\n\n max_wal_size|2097152\n min_wal_size|1048576\n\nDo these two values look ok?\n\nBoth these values were generated by pgtune [4], but it seems like pgtune\nthinks they're expressed by default in KB.\nLooking at the PostgreSQL code, it seems to me that these two are\nexpressed in MB, at least that's what I understand when I see\nGUC_UNIT_MB in the source code [6].\n\nSo maybe the pgtune fork we're using has a bug in the sense that it\nproduces an incorrect value for those two parameters? 
(should be in MB\nbut is expressed in KB, therefore much higher than what it should be).\n\nAnother question is, how can I use any of the checkpoint settings\nto control the WAL that is kept around?\n\n* checkpoint_timeout \n* checkpoint_completion_target \n* checkpoint_flush_after \n* checkpoint_warning \n\n=========\n\nI actually tried something with these settings on a test environment.\nI've used the following settings:\n\n checkpoint_timeout = 40s\n min_wal_size = 600MB\n max_wal_size = 900MB\n\nThen I've created a db named x1 and ran this on it four or five times.\n\n pgbench -i -s 70 x1\n\nThe pg_xlog directory grew to 2.2G and after a few minutes, it decreased to 2.0G\nAfter about 40 minutes it decreased to 1.4G and it's not going any lower.\nI was expecting pg_xlog's size to be 600MB after the first WAL removal had run.\nShould I expect that the size will eventually drop to 600MB or will it just sit there at 1.4G?\n\n=========\n\nOther thoughts:\n\nI have looked a bit at Pg internals too, I'm seeing four functions\nthere that are responsible for removing WAL: XLogArchiveIsReady,\nRemoveXlogFile, RemoveOldXlogFiles, XLOGfileslop.\nAll of these belong to /src/backend/access/transam/xlog.c\n\nThe only place in the code that seems to take a decision about how much\nWAL to recycle and how much to remove is the function XLOGfileslop [2].\n\nIt seems like XLOGfileslop is an estimate for the number of WAL to keep\naround(recycled WAL). Both max_wal_size and min_wal_size are used inside\nXLOGfileslop.\n\nAs far as checkpoint_* GUC settings go, they seem to be involved as well.\nSo far, the only thing I know about checkpoints is that between\ncheckpoints, many WAL are created. The amount of WAL between checkpoints\ncan vary. I don't have a good understanding about the interplay between\ncheckpoints and WAL.\n\n\nI'd be grateful for any thoughts on how to improve this, and better control\nthe amount of WAL kept in pg_xlog. \n\nThank you,\nStefan\n\n[1] https://www.postgresql.org/docs/9.6/static/wal-configuration.html\n[2] https://github.com/postgres/postgres/blob/0c5803b450e0cc29b3527df3f352e6f18a038cc6/src/backend/access/transam/xlog.c#L2258\n[3] https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS\n[4] https://github.com/kmatt/pgtune\n[5] https://github.com/kmatt/pgtune/blob/master/pgtune#L560\n[6] https://github.com/postgres/postgres/blob/f49842d1ee31b976c681322f76025d7732e860f3/src/backend/utils/misc/guc.c#L2268\n\n\nStefan Petrea\nSystem Engineer\n\[email protected] \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Oct 2017 11:28:15 +0000",
"msg_from": "Stefan Petrea <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL still kept in pg_xlog even long after heavy workload is done"
},
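Before tuning further, it can help to check how the server itself reports these two settings and how much WAL is actually sitting in pg_xlog, rather than reasoning from the raw pg_settings numbers. A minimal sketch for 9.6, assuming the default 16 MB segment size and superuser access; everything referenced is a standard catalog view or built-in function:

    -- configured bounds, rendered with their units
    SELECT name, setting, unit, current_setting(name) AS human_readable
    FROM pg_settings
    WHERE name IN ('min_wal_size', 'max_wal_size', 'wal_keep_segments', 'checkpoint_timeout');

    -- WAL segments currently kept in pg_xlog (9.6 layout, 16 MB segments assumed)
    SELECT count(*) AS segments,
           pg_size_pretty(count(*) * 16 * 1024 * 1024) AS approx_size
    FROM pg_ls_dir('pg_xlog') AS f
    WHERE f ~ '^[0-9A-F]{24}$';

Comparing the second number against min_wal_size over time shows whether the directory is merely holding recycled segments or really growing.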
{
"msg_contents": "To get the values right, you have to consider the \"unit\" column in \npg_settings. On mine, it is 16M for both min and max wal size. So it \nwould be\n1024 x 1024 x 16 x <value> (pg_settings.min_wal_size or \npg_settings.max_wal_size)\n\nThe result of this formula should be close to what you specified in \npostgresql.conf.\n\nSo modify your SQL a bit:\npsql -c \"select name, setting, unit from pg_settings where name like \n'%wal_size';\"\n\nRegards,\nMichael Vitale\n\n> Stefan Petrea <mailto:[email protected]>\n> Friday, October 27, 2017 7:28 AM\n> Hello,\n>\n> We're encountering some problems with WAL growth in production with\n> PostgreSQL 9.6.3 and 9.6.2. From what I know a WAL file can either be\n> recycled(and would be reused) or deleted.\n> We'd like to have better control over the amount of WAL that is kept \n> around.\n> There were a few occasions where we had to resize partitions because\n> pg_xlog grew as much as it did.\n>\n> According to the docs [1] there are some parameters in GUC \n> (postgresql.conf) about this.\n> The parameters I've been able to identify are the following:\n>\n> * wal_keep_segments\n> * max_wal_size\n> * min_wal_size\n>\n> Our WAL grows a lot around the time of our product upgrades (that is,\n> when we upgrade to a new version of our database, so not a Pg upgrade,\n> just a newer version of our db schema, plpgsql code etc).\n> As part of this upgrade, we add new columns or have some large UPDATEs\n> on tables as big as 300M (but in one case we also have one with 1.5B \n> rows).\n>\n> I am seeing the following int he docs [3]\n>\n> min_wal_size (integer)\n> As long as WAL disk usage stays below this setting, old WAL files are\n> always recycled for future use at a checkpoint, rather than removed.\n> This can be used to ensure that enough WAL space is reserved to handle\n> spikes in WAL usage, for example when running large batch jobs. The \n> default\n> is 80 MB. This parameter can only be set in the postgresql.conf file or\n> on the server command line.\n>\n> This sounds very familiar because, that's essentially what we're \n> doing. There\n> are some large jobs that cause a lot of workload and changes and \n> generate a lot of WAL.\n>\n> So far, the way I interpret this is min_wal_size is the amount of WAL\n> recycled (that is kept around to be reused) and max_wal_size is the\n> total amount of WAL allowed to be kept on disk.\n>\n> I would also like to interpret the default values of min_wal_size and \n> max_wal_size.\n> So if I run the following query:\n>\n> psql -c \"select name, setting from pg_settings where name like \n> '%wal_size';\"\n>\n> I get the following:\n>\n> max_wal_size|2097152\n> min_wal_size|1048576\n>\n> Do these two values look ok?\n>\n> Both these values were generated by pgtune [4], but it seems like pgtune\n> thinks they're expressed by default in KB.\n> Looking at the PostgreSQL code, it seems to me that these two are\n> expressed in MB, at least that's what I understand when I see\n> GUC_UNIT_MB in the source code [6].\n>\n> So maybe the pgtune fork we're using has a bug in the sense that it\n> produces an incorrect value for those two parameters? 
(should be in MB\n> but is expressed in KB, therefore much higher than what it should be).\n>\n> Another question is, how can I use any of the checkpoint settings\n> to control the WAL that is kept around?\n>\n> * checkpoint_timeout\n> * checkpoint_completion_target\n> * checkpoint_flush_after\n> * checkpoint_warning\n>\n> =========\n>\n> I actually tried something with these settings on a test environment.\n> I've used the following settings:\n>\n> checkpoint_timeout = 40s\n> min_wal_size = 600MB\n> max_wal_size = 900MB\n>\n> Then I've created a db named x1 and ran this on it four or five times.\n>\n> pgbench -i -s 70 x1\n>\n> The pg_xlog directory grew to 2.2G and after a few minutes, it \n> decreased to 2.0G\n> After about 40 minutes it decreased to 1.4G and it's not going any lower.\n> I was expecting pg_xlog's size to be 600MB after the first WAL removal \n> had run.\n> Should I expect that the size will eventually drop to 600MB or will it \n> just sit there at 1.4G?\n>\n> =========\n>\n> Other thoughts:\n>\n> I have looked a bit at Pg internals too, I'm seeing four functions\n> there that are responsible for removing WAL: XLogArchiveIsReady,\n> RemoveXlogFile, RemoveOldXlogFiles, XLOGfileslop.\n> All of these belong to /src/backend/access/transam/xlog.c\n>\n> The only place in the code that seems to take a decision about how much\n> WAL to recycle and how much to remove is the function XLOGfileslop [2].\n>\n> It seems like XLOGfileslop is an estimate for the number of WAL to keep\n> around(recycled WAL). Both max_wal_size and min_wal_size are used inside\n> XLOGfileslop.\n>\n> As far as checkpoint_* GUC settings go, they seem to be involved as well.\n> So far, the only thing I know about checkpoints is that between\n> checkpoints, many WAL are created. The amount of WAL between checkpoints\n> can vary. I don't have a good understanding about the interplay between\n> checkpoints and WAL.\n>\n>\n> I'd be grateful for any thoughts on how to improve this, and better \n> control\n> the amount of WAL kept in pg_xlog.\n>\n> Thank you,\n> Stefan\n>\n> [1] https://www.postgresql.org/docs/9.6/static/wal-configuration.html\n> [2] \n> https://github.com/postgres/postgres/blob/0c5803b450e0cc29b3527df3f352e6f18a038cc6/src/backend/access/transam/xlog.c#L2258\n> [3] \n> https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS\n> [4] https://github.com/kmatt/pgtune\n> [5] https://github.com/kmatt/pgtune/blob/master/pgtune#L560\n> [6] \n> https://github.com/postgres/postgres/blob/f49842d1ee31b976c681322f76025d7732e860f3/src/backend/utils/misc/guc.c#L2268\n>\n>\n> Stefan Petrea\n> System Engineer\n>\n> [email protected]\n>\n>\n>\n\n\n\n\nTo get the values right, \nyou have to consider the \"unit\" column in pg_settings.� On mine, it is \n16M for both min and max wal size.� So it would be\n1024 x 1024 x 16 x <value> (pg_settings.min_wal_size or \npg_settings.max_wal_size)\n\nThe result of this formula should be close to what you specified in \npostgresql.conf.\n\nSo modify your SQL a bit:\npsql -c \"select name, setting, unit\n from pg_settings where name like '%wal_size';\"\n\nRegards,\nMichael Vitale\n\n\n\n \nStefan Petrea Friday,\n October 27, 2017 7:28 AM \nHello,We're \nencountering some problems with WAL growth in production withPostgreSQL\n 9.6.3 and 9.6.2. 
From what I know a WAL file can either berecycled(and\n would be reused) or deleted.We'd like to have better control over \nthe amount of WAL that is kept around.There were a few occasions \nwhere we had to resize partitions becausepg_xlog grew as much as it \ndid.According to the docs [1] there are some parameters in GUC \n(postgresql.conf) about this.The parameters I've been able to \nidentify are the following:* wal_keep_segments* max_wal_size*\n min_wal_size Our WAL grows a lot around the time of our product\n upgrades (that is,when we upgrade to a new version of our database,\n so not a Pg upgrade,just a newer version of our db schema, plpgsql \ncode etc).As part of this upgrade, we add new columns or have some \nlarge UPDATEson tables as big as 300M (but in one case we also have \none with 1.5B rows).I am seeing the following int he docs [3]\n min_wal_size (integer) As long as WAL disk usage stays below \nthis setting, old WAL files are always recycled for future use \nat a checkpoint, rather than removed. This can be used to ensure \nthat enough WAL space is reserved to handle spikes in WAL usage, \nfor example when running large batch jobs. The default is 80 MB. \nThis parameter can only be set in the postgresql.conf file or on \nthe server command line.This sounds very familiar because, \nthat's essentially what we're doing. Thereare some large jobs that \ncause a lot of workload and changes and generate a lot of WAL.So\n far, the way I interpret this is min_wal_size is the amount of WALrecycled\n (that is kept around to be reused) and max_wal_size is thetotal \namount of WAL allowed to be kept on disk.I would also like to \ninterpret the default values of min_wal_size and max_wal_size.So if I\n run the following query: psql -c \"select name, setting from \npg_settings where name like '%wal_size';\"I get the following:\n max_wal_size|2097152 min_wal_size|1048576Do these two\n values look ok?Both these values were generated by pgtune [4], \nbut it seems like pgtunethinks they're expressed by default in KB.Looking\n at the PostgreSQL code, it seems to me that these two areexpressed \nin MB, at least that's what I understand when I seeGUC_UNIT_MB in \nthe source code [6].So maybe the pgtune fork we're using has a \nbug in the sense that itproduces an incorrect value for those two \nparameters? (should be in MBbut is expressed in KB, therefore much \nhigher than what it should be).Another question is, how can I \nuse any of the checkpoint settingsto control the WAL that is kept \naround?* checkpoint_timeout * checkpoint_completion_target *\n checkpoint_flush_after * checkpoint_warning =========I\n actually tried something with these settings on a test environment.I've\n used the following settings: checkpoint_timeout = 40s \n min_wal_size = 600MB max_wal_size = 900MBThen I've \ncreated a db named x1 and ran this on it four or five times. 
\npgbench -i -s 70 x1The pg_xlog directory grew to 2.2G and after a\n few minutes, it decreased to 2.0GAfter about 40 minutes it \ndecreased to 1.4G and it's not going any lower.I was expecting \npg_xlog's size to be 600MB after the first WAL removal had run.Should\n I expect that the size will eventually drop to 600MB or will it just \nsit there at 1.4G?=========Other thoughts:I have\n looked a bit at Pg internals too, I'm seeing four functionsthere \nthat are responsible for removing WAL: XLogArchiveIsReady,RemoveXlogFile,\n RemoveOldXlogFiles, XLOGfileslop.All of these belong to \n/src/backend/access/transam/xlog.cThe only place in the code \nthat seems to take a decision about how muchWAL to recycle and how \nmuch to remove is the function XLOGfileslop [2].It seems like \nXLOGfileslop is an estimate for the number of WAL to keeparound(recycled\n WAL). Both max_wal_size and min_wal_size are used insideXLOGfileslop.As\n far as checkpoint_* GUC settings go, they seem to be involved as well.So\n far, the only thing I know about checkpoints is that betweencheckpoints,\n many WAL are created. The amount of WAL between checkpointscan \nvary. I don't have a good understanding about the interplay betweencheckpoints\n and WAL.I'd be grateful for any thoughts on how to improve \nthis, and better controlthe amount of WAL kept in pg_xlog. Thank\n you,Stefan[1] \nhttps://www.postgresql.org/docs/9.6/static/wal-configuration.html[2]\n \nhttps://github.com/postgres/postgres/blob/0c5803b450e0cc29b3527df3f352e6f18a038cc6/src/backend/access/transam/xlog.c#L2258[3]\n \nhttps://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS[4]\n https://github.com/kmatt/pgtune[5] \nhttps://github.com/kmatt/pgtune/blob/master/pgtune#L560[6] \nhttps://github.com/postgres/postgres/blob/f49842d1ee31b976c681322f76025d7732e860f3/src/backend/utils/misc/guc.c#L2268Stefan\n PetreaSystem [email protected]",
"msg_date": "Fri, 27 Oct 2017 08:50:01 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL still kept in pg_xlog even long after heavy workload\n is done"
}
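One way to see how the checkpoint settings interact with WAL retention is to watch the checkpoint counters: checkpoints driven by checkpoint_timeout are counted as checkpoints_timed, while checkpoints forced early (for example because WAL reached max_wal_size) are counted as checkpoints_req. A small sketch using the standard 9.6 statistics view:

    SELECT checkpoints_timed,
           checkpoints_req,        -- forced checkpoints, e.g. WAL hit max_wal_size
           checkpoint_write_time,
           checkpoint_sync_time,
           stats_reset
    FROM pg_stat_bgwriter;

If checkpoints_req climbs during the pgbench runs, WAL growth is being bounded by max_wal_size rather than by the timeout, which would be consistent with pg_xlog hovering well above min_wal_size for a while after the load stops.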
] |
[
{
"msg_contents": "Hello everyone,\n\nPlease consider the following three semantically equivalent, but differently written queries:\n\nQuery A:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item HAVING sum(amount) >= 1\n) b ON b.item = a. \"ID\"\n\nQuery B:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions GROUP BY item\n) b ON b.item = a. \"ID\" WHERE b.stock >= 1\n\nQuery C:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions b GROUP BY item OFFSET 0\n) b ON b.item = a. \"ID\" WHERE b.stock >= 1\n\nFYI: stocktransactions.item and stocktransactions.amount have not null constraints and stocktransactions.item is a foreign key referencing items.ID, the primary key of items.\n\nQueries A + B generate the same plan and execute as follows:\n\nMerge Join (cost=34935.30..51701.59 rows=22285 width=344) (actual time=463.824..659.553 rows=15521 loops=1)\n Merge Cond: (a.\"ID\" = b.item)\n -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..15592.23 rows=336083 width=332) (actual time=0.012..153.899 rows=336064 loops=1)\n -> Sort (cost=34934.87..34990.59 rows=22285 width=12) (actual time=463.677..466.146 rows=15521 loops=1)\n Sort Key: b.item\n Sort Method: quicksort Memory: 1112kB\n -> Finalize HashAggregate (cost=32879.78..33102.62 rows=22285 width=12) (actual time=450.724..458.667 rows=15521 loops=1)\n Group Key: b.item\n Filter: (sum(b.amount) >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Gather (cost=27865.65..32545.50 rows=44570 width=12) (actual time=343.715..407.243 rows=162152 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial HashAggregate (cost=26865.65..27088.50 rows=22285 width=12) (actual time=336.416..348.105 rows=54051 loops=3)\n Group Key: b.item\n -> Parallel Seq Scan on stocktransactions b (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..170.646 rows=579563 loops=3)\nPlanning time: 0.277 ms\nExecution time: 661.342 ms\n\n\nPlan C though, thanks to the \"offset optimization fence\", executes the following, more efficient plan:\n\n\nNested Loop (cost=32768.77..41146.56 rows=7428 width=344) (actual time=456.611..525.395 rows=15521 loops=1 total=525.395)\n -> Subquery Scan on c (cost=32768.35..33269.76 rows=7428 width=12) (actual time=456.591..475.204 rows=15521 loops=1 total=475.204)\n Filter: (c.stock >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Finalize HashAggregate (cost=32768.35..32991.20 rows=22285 width=12) (actual time=456.582..468.124 rows=63798 loops=1 total=468.124)\n Group Key: b.item\n -> Gather (cost=27865.65..32545.50 rows=44570 width=12) (actual time=348.479..415.463 rows=162085 loops=1 total=415.463)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial HashAggregate (cost=26865.65..27088.50 rows=22285 width=12) (actual time=343.952..355.912 rows=54028 loops=3 total=1067.736)\n Group Key: b.item\n -> Parallel Seq Scan on stocktransactions b (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..172.235 rows=579563 loops=3 total=516.705)\n -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..1.05 rows=1 width=332) (actual time=0.003..0.003 rows=1 loops=15521 total=46.563)\n Index Cond: (\"ID\" = c.item)\nPlanning time: 0.223 ms\nExecution time: 526.203 ms\n\n\nI'm wondering, given that Query C's plan has lower overall costs than Query A/B's, why wouldn't the planner choose to execute that plan for queries A+B as well?\nIt has lower projected 
startup cost as well as lower total cost, so apparently the optimizer does not consider such a plan with a subquery scan at all (otherwise it would choose it based on the lower cost estimates, right?) unless one forces it to via OFFSET 0.\n\nThough I wouldn't necessarily consider this a bug, it is an issue that one has to explicitly work around with inadvisable optimization fences, and it would be great if this could be fixed.\n\nThanks to the developer community for delivering this great product, I hope this helps in enhancing it.\n\nCheers,\n\nBenjamin\n\n-- \n\nBenjamin Coutu\[email protected]\n\nZeyOS, Inc.\nhttp://www.zeyos.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Oct 2017 12:24:20 +0100",
"msg_from": "Benjamin Coutu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cheaper subquery scan not considered unless offset 0"
},
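For what it's worth, OFFSET 0 is not the only way to get the fenced shape: on the versions discussed here (pre-12), a CTE is also an optimization fence, so the same subquery-scan plan can be requested without the somewhat surprising OFFSET. A sketch equivalent to Query C under that assumption, using the tables from the message above:

    WITH stock AS (  -- materialized, i.e. an optimization fence, before PostgreSQL 12
        SELECT item, sum(amount) AS stock
        FROM stocktransactions
        GROUP BY item
    )
    SELECT *
    FROM items a
    JOIN stock c ON c.item = a."ID"
    WHERE c.stock >= 1;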
{
"msg_contents": "On 30 October 2017 at 00:24, Benjamin Coutu <[email protected]> wrote:\n> -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..1.05 rows=1 width=332) (actual time=0.003..0.003 rows=1 loops=15521 total=46.563)\n\nI've never seen EXPLAIN output like that before.\n\nIs this some modified version of PostgreSQL?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Oct 2017 00:46:42 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cheaper subquery scan not considered unless offset 0"
},
{
"msg_contents": "Benjamin Coutu <[email protected]> writes:\n> Please consider the following three semantically equivalent, but differently written queries:\n> ...\n> Queries A + B generate the same plan and execute as follows:\n\n> -> Finalize HashAggregate (cost=32879.78..33102.62 rows=22285 width=12) (actual time=450.724..458.667 rows=15521 loops=1)\n> Group Key: b.item\n> Filter: (sum(b.amount) >= '1'::double precision)\n> Rows Removed by Filter: 48277\n\n> Plan C though, thanks to the \"offset optimization fence\", executes the following, more efficient plan:\n\n> -> Subquery Scan on c (cost=32768.35..33269.76 rows=7428 width=12) (actual time=456.591..475.204 rows=15521 loops=1 total=475.204)\n> Filter: (c.stock >= '1'::double precision)\n> Rows Removed by Filter: 48277\n> -> Finalize HashAggregate (cost=32768.35..32991.20 rows=22285 width=12) (actual time=456.582..468.124 rows=63798 loops=1 total=468.124)\n> Group Key: b.item\n\nHuh. So we can see that the grouping step produces 63798 rows in reality,\nof which 15521 pass the >= filter condition. In Plan C, the planner\nestimates the total number of group rows at 22285; then, having no\ninformation about the statistics of c.stock, it uses DEFAULT_INEQ_SEL\n(0.333) as the filter selectivity estimate, arriving at 7428 as the\nestimated number of result rows for the subquery.\n\nIn Plan A+B, the planner presumably estimated the number of group rows at\n22285 as well, but then it comes up with 22285 as the overall result.\nUh, what about the HAVING?\n\nEvidently, the difference between 7428 and 22285 estimated rows out of\nthe subquery is enough to prompt a change in join plan for this query.\nSince the true number is in between, it's just luck that Plan C is faster.\nI don't put any great amount of stock in one join plan or the other\nhaving been chosen for this case based on those estimates.\n\nBut ... what about the HAVING? I took a quick look around and couldn't\nfind anyplace where the selectivity of an aggregate's filter condition\ngets accounted for, which explains this observed behavior. That seems\nlike a big oversight :-(\n\nNow, it's true that we're basically never gonna be able to do better than\ndefault selectivity estimates for post-aggregation filter conditions.\nMaybe, at some point in the dim past, somebody intentionally decided that\napplying the standard selectivity estimation logic to HAVING clauses was a\nloser. But I don't see any comments to that effect, and anyway taking the\nselectivity as 1.0 all the time doesn't seem very bright either.\n\nChanging this in back branches might be too much of a behavioral change,\nbut it seems like we oughta change HEAD to apply standard selectivity\nestimation to the HAVING clause.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Oct 2017 10:58:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cheaper subquery scan not considered unless offset 0"
}
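Putting numbers to the two estimates discussed above (a back-of-the-envelope check using the figures from the posted plans):

    estimated groups out of the aggregation : 22285
    Plan C   : 22285 * 0.3333 (DEFAULT_INEQ_SEL) ≈ 7428    -- filter applied at the subquery scan
    Plan A/B : 22285 * 1.0                       = 22285   -- HAVING given no selectivity at all
    actual   : 63798 groups, of which 15521 pass the >= 1 condition

The true value sitting between the two estimates is what makes it luck, rather than better estimation, that the cheaper plan gets picked only in the fenced variant.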
] |
[
{
"msg_contents": "It's not a modified postgres version. It's simply for my convenience that my tooling calculats \"total\" as \"actual time\" multiplied by \"loops\". Looks like I didn't properly strip that away when copy-pasting.\n\nHere are the queries and original plans again, sorry for the confusion.\n\nQuery A:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions b GROUP BY item HAVING sum(amount) >= 1\n) c ON c.item = a.\"ID\"\n\nQuery B:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions b GROUP BY item\n) c ON c.item = a.\"ID\" WHERE c.stock >= 1\n\nQuery C:\n\nSELECT * FROM items a INNER JOIN (\n SELECT item, sum(amount) stock FROM stocktransactions b GROUP BY item OFFSET 0\n) c ON c.item = a.\"ID\" WHERE c.stock >= 1\n\nQueries A + B generate the same plan and execute as follows:\n\nMerge Join (cost=34935.30..51701.59 rows=22285 width=344) (actual time=463.824..659.553 rows=15521 loops=1)\n Merge Cond: (a.\"ID\" = b.item)\n -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..15592.23 rows=336083 width=332) (actual time=0.012..153.899 rows=336064 loops=1)\n -> Sort (cost=34934.87..34990.59 rows=22285 width=12) (actual time=463.677..466.146 rows=15521 loops=1)\n Sort Key: b.item\n Sort Method: quicksort Memory: 1112kB\n -> Finalize HashAggregate (cost=32879.78..33102.62 rows=22285 width=12) (actual time=450.724..458.667 rows=15521 loops=1)\n Group Key: b.item\n Filter: (sum(b.amount) >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Gather (cost=27865.65..32545.50 rows=44570 width=12) (actual time=343.715..407.243 rows=162152 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial HashAggregate (cost=26865.65..27088.50 rows=22285 width=12) (actual time=336.416..348.105 rows=54051 loops=3)\n Group Key: b.item\n -> Parallel Seq Scan on stocktransactions b (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..170.646 rows=579563 loops=3)\nPlanning time: 0.277 ms\nExecution time: 661.342 ms\n\nPlan C though, thanks to the \"offset optimization fence\", executes the following, more efficient plan:\n\nNested Loop (cost=32768.77..41146.56 rows=7428 width=344) (actual time=456.611..525.395 rows=15521 loops=1)\n -> Subquery Scan on c (cost=32768.35..33269.76 rows=7428 width=12) (actual time=456.591..475.204 rows=15521 loops=1)\n Filter: (c.stock >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Finalize HashAggregate (cost=32768.35..32991.20 rows=22285 width=12) (actual time=456.582..468.124 rows=63798 loops=1)\n Group Key: b.item\n -> Gather (cost=27865.65..32545.50 rows=44570 width=12) (actual time=348.479..415.463 rows=162085 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial HashAggregate (cost=26865.65..27088.50 rows=22285 width=12) (actual time=343.952..355.912 rows=54028 loops=3)\n Group Key: b.item\n -> Parallel Seq Scan on stocktransactions b (cost=0.00..23281.60 rows=716810 width=12) (actual time=0.015..172.235 rows=579563 loops=3)\n -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..1.05 rows=1 width=332) (actual time=0.003..0.003 rows=1 loops=15521)\n Index Cond: (\"ID\" = c.item)\nPlanning time: 0.223 ms\nExecution time: 526.203 ms\n\n\n========== Original ==========\nFrom: David Rowley <[email protected]>\nTo: Benjamin Coutu <[email protected]>\nDate: Sun, 29 Oct 2017 12:46:42 +0100\nSubject: Re: [PERFORM] Cheaper subquery scan not considered unless offset 0\n\n> \n> \n> On 30 October 2017 at 00:24, Benjamin Coutu <[email 
protected]> wrote:\n> > -> Index Scan using \"PK_items_ID\" on items a (cost=0.42..1.05 rows=1 width=332) (actual time=0.003..0.003 rows=1 loops=15521 total=46.563)\n> \n> I've never seen EXPLAIN output like that before.\n> \n> Is this some modified version of PostgreSQL?\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Oct 2017 14:17:19 +0100",
"msg_from": "Benjamin Coutu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cheaper subquery scan not considered unless offset 0"
}
] |
[
{
"msg_contents": "There is actually another separate issue here apart from that the planner obviously choosing the wrong plan as originally described in my last message, a plan it knows to be more expensive based on cost estimates.\n\nTake a look at the way the filter condition is treated differently when estimating the number of returned rows when applied in different nodes.\n\nQueries A/B:\n\n -> Finalize HashAggregate (cost=32879.78..33102.62 rows=22285 width=12) (actual time=450.724..458.667 rows=15521 loops=1)\n Group Key: b.item\n Filter: (sum(b.amount) >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Gather ...\n\nQuery C:\n\n -> Subquery Scan on c (cost=32768.35..33269.76 rows=7428 width=12) (actual time=456.591..475.204 rows=15521 loops=1)\n Filter: (c.stock >= '1'::double precision)\n Rows Removed by Filter: 48277\n -> Finalize HashAggregate (cost=32768.35..32991.20 rows=22285 width=12) (actual time=456.582..468.124 rows=63798 loops=1)\n Group Key: b.item\n -> Gather ...\n\nInterestingly enough the subquery scan with query C correctly accounts for the filter when estimating rows=7428, while A/B doesn't seem to account for the filter in the HasAggregate node (estimated rows=22285). This looks like a bug.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Oct 2017 15:41:20 +0100",
"msg_from": "Benjamin Coutu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cheaper subquery scan not considered unless offset 0"
}
] |
[
{
"msg_contents": " From performance standpoint I thought set operation was better than Cursor.\nBut I found Cursor to be more effective than Set operation. Is there a way\nwe can force optimizer to use cursor plan. QUERY PLAN \n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Oct 2017 15:51:06 -0700 (MST)",
"msg_from": "patibandlakoshal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursor vs Set Operation"
},
{
"msg_contents": "On Mon, Oct 30, 2017 at 5:51 PM, patibandlakoshal\n<[email protected]> wrote:\n> From performance standpoint I thought set operation was better than Cursor.\n> But I found Cursor to be more effective than Set operation. Is there a way\n> we can force optimizer to use cursor plan. QUERY PLAN\n\nYou're going to have to be a little more specific. In particular, I\nhave no idea what a 'cursor plan' is. What precise operations did you\ndo?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Nov 2017 11:28:12 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor vs Set Operation"
}
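One plausible reading of "cursor plan" is the fast-start plan that a declared cursor gets: cursors are planned with cursor_tuple_fraction (default 0.1), so the planner optimizes for returning the first fraction of the rows quickly rather than for total runtime. A hedged sketch, with big_table and its id column standing in for whatever the original query touched:

    BEGIN;
    -- planned with cursor_tuple_fraction, so a fast-start plan (e.g. an index scan)
    -- may be chosen where the equivalent plain SELECT would pick a sort or hash
    DECLARE c CURSOR FOR
        SELECT * FROM big_table ORDER BY id;
    FETCH 100 FROM c;
    CLOSE c;
    COMMIT;

    -- the fraction can be adjusted per session if cursor plans come out too eager or too lazy
    SET cursor_tuple_fraction = 0.2;

Without the actual statement and EXPLAIN output it is hard to say more, which is presumably what the request for specifics above is getting at.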
] |
[
{
"msg_contents": "I'm working on an application which performs a lot of inserts in 2 large\ntables.\nPreviously we didn't know about lwlocks, but we're now testing in Amazon RDS\nAurora - PostgreSQL (9.6.3).\nIn previous load tests, both local servers and classic Amazon RDS, there was\nsome scalability limit we couldn't find - CPU / memory / IO were all low,\nbut still there was contention that wasn't visible in PostgreSQL views.\nNow with Aurora it shows that most of the sessions are blocking on\nLWLock:buffer_content.\n\nI would like some insights, as we have 2 tables with ~35 million rows each,\nand they have several indexes (shown below).\nThis request is a crucial operation for our system, and each application\nrequest must insert on those 2 large tables in a single transaction, plus\nsome other selects.\n\nI've searched a lot and found nothing on how to mitigate this issue. Just\nfound that it might be related to inserts.\n\nAny tips?\n\nFor reference, here are the descriptions of both tables:\n\n\\d transactions\n Tabela\n\"public.transactions\"\n Coluna | Tipo \n| Modificadores \n---------------------------------------------------+-----------------------------+-------------------------------------------------------------------\n id | bigint \n| não nulo valor padrão de nextval('transactions_id_seq'::regclass)\n subclass | character varying(31) \n| \n amount | numeric \n| não nulo\n authorization_status | character varying(255) \n| \n date | timestamp without time\nzone | não nulo\n description | text \n| \n transaction_feedback_expiration_notified | boolean \n| \n transaction_feedback_expiration_reminder_notified | boolean \n| \n transaction_feedback_reminder_notified | boolean \n| \n by_id | bigint \n| \n channel_id | bigint \n| não nulo\n feedback_id | bigint \n| \n from_user_id | bigint \n| \n next_authorization_level_id | bigint \n| \n to_user_id | bigint \n| \n type_id | bigint \n| não nulo\n order_id | bigint \n| \n status | character varying(255) \n| \n received | boolean \n| \n principal_type_id | bigint \n| \n access_client_id | bigint \n| \n original_transfer_id | bigint \n| \n show_to_receiver | boolean \n| \n expiration_date | timestamp without time\nzone | \n scheduled | boolean \n| \n first_installment_immediate | boolean \n| \n installments_count | integer \n| \n process_date | timestamp without time\nzone | \n comments | text \n| \n transaction_id | bigint \n| \n sms_code | character varying(255) \n| \n external_principal_value | character varying(255) \n| \n external_principal_type_id | bigint \n| \n received_by_id | bigint \n| \n from_name | character varying(255) \n| \n to_name | character varying(255) \n| \n next_occurrence_date | timestamp without time\nzone | \n occurrences_count | integer \n| \n occurrence_interval_amount | integer \n| \n occurrence_interval_field | character varying(255) \n| \n last_occurrence_failure_id | bigint \n| \n last_occurrence_success_id | bigint \n| \n by_self | boolean \n| \n from_system | boolean \n| \n to_system | boolean \n| \n ticket_number | character varying(255) \n| \n cancel_url | character varying(255) \n| \n success_url | character varying(255) \n| \n transaction_number | character varying(255) \n| \n expiration_date_comments | text \n| \nÍndices:\n \"transactions_pkey\" PRIMARY KEY, btree (id)\n \"ix_external_principal_value\" btree (external_principal_value) WHERE\nexternal_principal_value IS NOT NULL\n \"ix_recurring_next_occurrence_date\" btree (next_occurrence_date) WHERE\nnext_occurrence_date IS NOT NULL\n 
\"ix_ticket_number\" btree (lower(ticket_number::text)) WHERE\nticket_number IS NOT NULL\n \"ix_transactions_amount\" btree (amount)\n \"ix_transactions_date\" btree (date)\n \"ix_transactions_fk_transactions_access_client_id\" btree\n(access_client_id) WHERE access_client_id IS NOT NULL\n \"ix_transactions_fk_transactions_by_id\" btree (by_id) WHERE by_id IS NOT\nNULL\n \"ix_transactions_fk_transactions_channel_id\" btree (channel_id)\n \"ix_transactions_fk_transactions_external_principal_type_id\" btree\n(external_principal_type_id) WHERE external_principal_type_id IS NOT NULL\n \"ix_transactions_fk_transactions_feedback_id\" btree (feedback_id) WHERE\nfeedback_id IS NOT NULL\n \"ix_transactions_fk_transactions_from_user_id\" btree (from_user_id)\nWHERE from_user_id IS NOT NULL\n \"ix_transactions_fk_transactions_last_occurrence_failure_id\" btree\n(last_occurrence_failure_id) WHERE last_occurrence_failure_id IS NOT NULL\n \"ix_transactions_fk_transactions_last_occurrence_success_id\" btree\n(last_occurrence_success_id) WHERE last_occurrence_success_id IS NOT NULL\n \"ix_transactions_fk_transactions_next_authorization_level_id\" btree\n(next_authorization_level_id) WHERE next_authorization_level_id IS NOT NULL\n \"ix_transactions_fk_transactions_order_id\" btree (order_id) WHERE\norder_id IS NOT NULL\n \"ix_transactions_fk_transactions_original_transfer_id\" btree\n(original_transfer_id) WHERE original_transfer_id IS NOT NULL\n \"ix_transactions_fk_transactions_principal_type_id\" btree\n(principal_type_id) WHERE principal_type_id IS NOT NULL\n \"ix_transactions_fk_transactions_received_by_id\" btree (received_by_id)\nWHERE received_by_id IS NOT NULL\n \"ix_transactions_fk_transactions_to_user_id\" btree (to_user_id) WHERE\nto_user_id IS NOT NULL\n \"ix_transactions_fk_transactions_transaction_id\" btree (transaction_id)\nWHERE transaction_id IS NOT NULL\n \"ix_transactions_fk_transactions_type_id\" btree (type_id)\n \"ix_transactions_subclass\" btree (subclass)\n \"ix_transactions_transaction_number\" btree\n(lower(transaction_number::text)) WHERE transaction_number IS NOT NULL\n \"next_occurrence_date\" btree (next_occurrence_date)\nRestrições de chave estrangeira:\n \"fk_transactions_access_client_id\" FOREIGN KEY (access_client_id)\nREFERENCES access_clients(id)\n \"fk_transactions_by_id\" FOREIGN KEY (by_id) REFERENCES users(id)\n \"fk_transactions_channel_id\" FOREIGN KEY (channel_id) REFERENCES\nchannels(id)\n \"fk_transactions_external_principal_type_id\" FOREIGN KEY\n(external_principal_type_id) REFERENCES principal_types(id)\n \"fk_transactions_feedback_id\" FOREIGN KEY (feedback_id) REFERENCES\nrefs(id)\n \"fk_transactions_from_user_id\" FOREIGN KEY (from_user_id) REFERENCES\nusers(id)\n \"fk_transactions_last_occurrence_failure_id\" FOREIGN KEY\n(last_occurrence_failure_id) REFERENCES failed_payment_occurrences(id)\n \"fk_transactions_last_occurrence_success_id\" FOREIGN KEY\n(last_occurrence_success_id) REFERENCES transfers(id)\n \"fk_transactions_next_authorization_level_id\" FOREIGN KEY\n(next_authorization_level_id) REFERENCES authorization_levels(id)\n \"fk_transactions_order_id\" FOREIGN KEY (order_id) REFERENCES\nad_orders(id)\n \"fk_transactions_original_transfer_id\" FOREIGN KEY\n(original_transfer_id) REFERENCES transfers(id)\n \"fk_transactions_principal_type_id\" FOREIGN KEY (principal_type_id)\nREFERENCES principal_types(id)\n \"fk_transactions_received_by_id\" FOREIGN KEY (received_by_id) REFERENCES\nusers(id)\n \"fk_transactions_to_user_id\" FOREIGN KEY 
(to_user_id) REFERENCES\nusers(id)\n \"fk_transactions_transaction_id\" FOREIGN KEY (transaction_id) REFERENCES\ntransactions(id)\n \"fk_transactions_type_id\" FOREIGN KEY (type_id) REFERENCES\ntransfer_types(id)\nReferenciada por:\n TABLE \"amount_reservations\" CONSTRAINT\n\"fk_amount_reservations_external_payment_id\" FOREIGN KEY\n(external_payment_id) REFERENCES transactions(id)\n TABLE \"amount_reservations\" CONSTRAINT\n\"fk_amount_reservations_scheduled_payment_id\" FOREIGN KEY\n(scheduled_payment_id) REFERENCES transactions(id)\n TABLE \"amount_reservations\" CONSTRAINT\n\"fk_amount_reservations_transaction_id\" FOREIGN KEY (transaction_id)\nREFERENCES transactions(id)\n TABLE \"failed_payment_occurrences\" CONSTRAINT\n\"fk_failed_payment_occurrences_recurring_payment_id\" FOREIGN KEY\n(recurring_payment_id) REFERENCES transactions(id)\n TABLE \"refs\" CONSTRAINT \"fk_refs_transaction_id\" FOREIGN KEY\n(transaction_id) REFERENCES transactions(id)\n TABLE \"scheduled_payment_installments\" CONSTRAINT\n\"fk_scheduled_payment_installments_scheduled_payment_id\" FOREIGN KEY\n(scheduled_payment_id) REFERENCES transactions(id)\n TABLE \"transaction_authorizations\" CONSTRAINT\n\"fk_transaction_authorizations_transaction_id\" FOREIGN KEY (transaction_id)\nREFERENCES transactions(id)\n TABLE \"transaction_custom_field_values\" CONSTRAINT\n\"fk_transaction_custom_field_values_owner_id\" FOREIGN KEY (owner_id)\nREFERENCES transactions(id)\n TABLE \"transactions\" CONSTRAINT \"fk_transactions_transaction_id\" FOREIGN\nKEY (transaction_id) REFERENCES transactions(id)\n TABLE \"transfers\" CONSTRAINT \"fk_transfers_transaction_id\" FOREIGN KEY\n(transaction_id) REFERENCES transactions(id)\n TABLE \"voucher_packs\" CONSTRAINT \"fk_voucher_packs_buy_id\" FOREIGN KEY\n(buy_id) REFERENCES transactions(id)\n TABLE \"vouchers\" CONSTRAINT \"fk_vouchers_redeem_id\" FOREIGN KEY\n(redeem_id) REFERENCES transactions(id)\n\n\n------------------------------------------------------------------\n\n\\d transfers\n Tabela\n\"public.transfers\"\n Coluna | Tipo | \nModificadores \n----------------------------------+-----------------------------+----------------------------------------------------------------\n id | bigint | não nulo\nvalor padrão de nextval('transfers_id_seq'::regclass)\n subclass | character varying(31) | \n amount | numeric | não nulo\n date | timestamp without time zone | não nulo\n emission_date | timestamp without time zone | \n expiration_date | timestamp without time zone | \n from_id | bigint | não nulo\n parent_id | bigint | \n to_id | bigint | não nulo\n type_id | bigint | não nulo\n charged_back_by_id | bigint | \n user_account_fee_log_id | bigint | \n chargeback_of_id | bigint | \n transaction_id | bigint | \n scheduled_payment_installment_id | bigint | \n transfer_fee_id | bigint | \n number | integer | \n by_id | bigint | \n transaction_number | character varying(255) | \nÍndices:\n \"transfers_pkey\" PRIMARY KEY, btree (id)\n \"ix_transfers_amount\" btree (amount)\n \"ix_transfers_date\" btree (date)\n \"ix_transfers_fk_transfers_by_id\" btree (by_id) WHERE by_id IS NOT NULL\n \"ix_transfers_fk_transfers_chargeback_of_id\" btree (chargeback_of_id)\nWHERE chargeback_of_id IS NOT NULL\n \"ix_transfers_fk_transfers_charged_back_by_id\" btree\n(charged_back_by_id) WHERE charged_back_by_id IS NOT NULL\n \"ix_transfers_fk_transfers_from_id\" btree (from_id)\n \"ix_transfers_fk_transfers_parent_id\" btree (parent_id) WHERE parent_id\nIS NOT NULL\n 
\"ix_transfers_fk_transfers_scheduled_payment_installment_id\" btree\n(scheduled_payment_installment_id) WHERE scheduled_payment_installment_id IS\nNOT NULL\n \"ix_transfers_fk_transfers_to_id\" btree (to_id)\n \"ix_transfers_fk_transfers_transaction_id\" btree (transaction_id) WHERE\ntransaction_id IS NOT NULL\n \"ix_transfers_fk_transfers_transfer_fee_id\" btree (transfer_fee_id)\nWHERE transfer_fee_id IS NOT NULL\n \"ix_transfers_fk_transfers_type_id\" btree (type_id)\n \"ix_transfers_fk_transfers_user_account_fee_log_id\" btree\n(user_account_fee_log_id) WHERE user_account_fee_log_id IS NOT NULL\n \"ix_transfers_transaction_number\" btree\n(lower(transaction_number::text)) WHERE transaction_number IS NOT NULL\nRestrições de chave estrangeira:\n \"fk_transfers_by_id\" FOREIGN KEY (by_id) REFERENCES users(id)\n \"fk_transfers_chargeback_of_id\" FOREIGN KEY (chargeback_of_id)\nREFERENCES transfers(id)\n \"fk_transfers_charged_back_by_id\" FOREIGN KEY (charged_back_by_id)\nREFERENCES transfers(id)\n \"fk_transfers_from_id\" FOREIGN KEY (from_id) REFERENCES accounts(id)\n \"fk_transfers_parent_id\" FOREIGN KEY (parent_id) REFERENCES\ntransfers(id)\n \"fk_transfers_scheduled_payment_installment_id\" FOREIGN KEY\n(scheduled_payment_installment_id) REFERENCES\nscheduled_payment_installments(id)\n \"fk_transfers_to_id\" FOREIGN KEY (to_id) REFERENCES accounts(id)\n \"fk_transfers_transaction_id\" FOREIGN KEY (transaction_id) REFERENCES\ntransactions(id)\n \"fk_transfers_transfer_fee_id\" FOREIGN KEY (transfer_fee_id) REFERENCES\ntransfer_fees(id)\n \"fk_transfers_type_id\" FOREIGN KEY (type_id) REFERENCES\ntransfer_types(id)\n \"fk_transfers_user_account_fee_log_id\" FOREIGN KEY\n(user_account_fee_log_id) REFERENCES user_account_fee_logs(id)\nReferenciada por:\n TABLE \"account_balances\" CONSTRAINT \"fk_account_balances_transfer_id\"\nFOREIGN KEY (transfer_id) REFERENCES transfers(id)\n TABLE \"failed_payment_occurrences\" CONSTRAINT\n\"fk_failed_payment_occurrences_transfer_id\" FOREIGN KEY (transfer_id)\nREFERENCES transfers(id)\n TABLE \"transactions\" CONSTRAINT\n\"fk_transactions_last_occurrence_success_id\" FOREIGN KEY\n(last_occurrence_success_id) REFERENCES transfers(id)\n TABLE \"transactions\" CONSTRAINT \"fk_transactions_original_transfer_id\"\nFOREIGN KEY (original_transfer_id) REFERENCES transfers(id)\n TABLE \"transfer_status_logs\" CONSTRAINT\n\"fk_transfer_status_logs_transfer_id\" FOREIGN KEY (transfer_id) REFERENCES\ntransfers(id)\n TABLE \"transfers\" CONSTRAINT \"fk_transfers_chargeback_of_id\" FOREIGN KEY\n(chargeback_of_id) REFERENCES transfers(id)\n TABLE \"transfers\" CONSTRAINT \"fk_transfers_charged_back_by_id\" FOREIGN\nKEY (charged_back_by_id) REFERENCES transfers(id)\n TABLE \"transfers\" CONSTRAINT \"fk_transfers_parent_id\" FOREIGN KEY\n(parent_id) REFERENCES transfers(id)\n TABLE \"transfers_transfer_status_flows\" CONSTRAINT\n\"fk_transfers_transfer_status_flows_transfer_id\" FOREIGN KEY (transfer_id)\nREFERENCES transfers(id)\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Oct 2017 09:30:08 -0700 (MST)",
"msg_from": "luisfpg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive insert vs heavy contention in LWLock:buffer_content"
}
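On stock PostgreSQL 9.6 the same contention can be confirmed without Aurora's instrumentation, since pg_stat_activity exposes wait events there as well. A small sketch to sample what active sessions are waiting on during the load test:

    SELECT wait_event_type, wait_event, count(*) AS sessions
    FROM pg_stat_activity
    WHERE state = 'active'
    GROUP BY wait_event_type, wait_event
    ORDER BY sessions DESC;

Repeated samples with buffer_content near the top would point at contention on hot index pages during the inserts, which is where trimming rarely used indexes on the two large tables tends to pay off.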
] |
[
{
"msg_contents": "Hello All I'm researching on Index-Advisor Tools to be applied in SQL\nqueries. At first I found this: - EnterpriseDB -\nhttps://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html\nSomeone would know of other tools for this purpose. I'd appreciate it if\nyou can help me.\n\nBest Regards\nNeto\n\nHello All\n\nI'm researching on Index-Advisor Tools to be applied in SQL queries.\nAt first I found this:\n\n- EnterpriseDB - https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html\n\nSomeone would know of other tools for this purpose. I'd appreciate it if you can help me.\nBest RegardsNeto",
"msg_date": "Tue, 31 Oct 2017 15:12:03 -0200",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index-Advisor Tools"
},
{
"msg_contents": "Hi Neto, maybe HypoPG\nCan help you:\n\nhttps://github.com/dalibo/hypopg\n\nEl 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:\n\n>\n> Hello All I'm researching on Index-Advisor Tools to be applied in SQL\n> queries. At first I found this: - EnterpriseDB -\n> https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_\n> Postgres_Advanced_Server_Guide.1.56.html Someone would know of other\n> tools for this purpose. I'd appreciate it if you can help me.\n>\n> Best Regards\n> Neto\n>\n\nHi Neto, maybe HypoPGCan help you:https://github.com/dalibo/hypopgEl 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:Hello All\n\nI'm researching on Index-Advisor Tools to be applied in SQL queries.\nAt first I found this:\n\n- EnterpriseDB - https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html\n\nSomeone would know of other tools for this purpose. I'd appreciate it if you can help me.\nBest RegardsNeto",
"msg_date": "Tue, 31 Oct 2017 14:19:58 -0300",
"msg_from": "Anthony Sotolongo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index-Advisor Tools"
},
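For reference, a minimal HypoPG session looks roughly like this (a sketch; my_table and its status column are placeholders, and only plain EXPLAIN, not EXPLAIN ANALYZE, sees hypothetical indexes):

    CREATE EXTENSION hypopg;

    -- define a hypothetical index: nothing is written to disk, visible only in this session
    SELECT * FROM hypopg_create_index('CREATE INDEX ON my_table (status)');

    -- if the hypothetical index helps, it shows up in the plan
    EXPLAIN SELECT * FROM my_table WHERE status = 0;

    -- drop all hypothetical indexes again
    SELECT hypopg_reset();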
{
"msg_contents": "Thanks for reply Antony.\nBut from what I've read, HYPOPG only allows you to create hypothetical\nindexes, so the DBA can analyze if it brings benefits.\nWhat I would like is a tool that from a SQL Query indicates which indexes\nwould be recommended to decrease the response time.\n\nBest Regards\nNeto\n\n2017-10-31 15:19 GMT-02:00 Anthony Sotolongo <[email protected]>:\n\n> Hi Neto, maybe HypoPG\n> Can help you:\n>\n> https://github.com/dalibo/hypopg\n>\n> El 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:\n>\n>>\n>> Hello All I'm researching on Index-Advisor Tools to be applied in SQL\n>> queries. At first I found this: - EnterpriseDB -\n>> https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgre\n>> s_Advanced_Server_Guide.1.56.html Someone would know of other tools for\n>> this purpose. I'd appreciate it if you can help me.\n>>\n>> Best Regards\n>> Neto\n>>\n>\n\nThanks for reply Antony. But from what I've read, HYPOPG only allows you to create hypothetical indexes, so the DBA can analyze if it brings benefits.What I would like is a tool that from a SQL Query indicates which indexes would be recommended to decrease the response time.Best RegardsNeto2017-10-31 15:19 GMT-02:00 Anthony Sotolongo <[email protected]>:Hi Neto, maybe HypoPGCan help you:https://github.com/dalibo/hypopgEl 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:Hello All\n\nI'm researching on Index-Advisor Tools to be applied in SQL queries.\nAt first I found this:\n\n- EnterpriseDB - https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html\n\nSomeone would know of other tools for this purpose. I'd appreciate it if you can help me.\nBest RegardsNeto",
"msg_date": "Tue, 31 Oct 2017 15:25:55 -0200",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index-Advisor Tools"
},
{
"msg_contents": "I will be very happy with a tool(or a stats table) that shows the most\nsearched values from a table(since a statistic reset). i.e.:\n\ntable foo (id int, year int)\n\ntop 3 searched value for year field: 2017(500x), 2016(300x), 2015(55x)\n\nWith this info we can create partial indexes or do a table partitioning.\n\n\n\n2017-10-31 15:25 GMT-02:00 Neto pr <[email protected]>:\n\n> Thanks for reply Antony.\n> But from what I've read, HYPOPG only allows you to create hypothetical\n> indexes, so the DBA can analyze if it brings benefits.\n> What I would like is a tool that from a SQL Query indicates which indexes\n> would be recommended to decrease the response time.\n>\n> Best Regards\n> Neto\n>\n> 2017-10-31 15:19 GMT-02:00 Anthony Sotolongo <[email protected]>:\n>\n>> Hi Neto, maybe HypoPG\n>> Can help you:\n>>\n>> https://github.com/dalibo/hypopg\n>>\n>> El 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:\n>>\n>>>\n>>> Hello All I'm researching on Index-Advisor Tools to be applied in SQL\n>>> queries. At first I found this: - EnterpriseDB -\n>>> https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgre\n>>> s_Advanced_Server_Guide.1.56.html Someone would know of other tools for\n>>> this purpose. I'd appreciate it if you can help me.\n>>>\n>>> Best Regards\n>>> Neto\n>>>\n>>\n>\n\nI will be very happy with a tool(or a stats table) that shows the most searched values from a table(since a statistic reset). i.e.:table foo (id int, year int)top 3 searched value for year field: 2017(500x), 2016(300x), 2015(55x)With this info we can create partial indexes or do a table partitioning.2017-10-31 15:25 GMT-02:00 Neto pr <[email protected]>:Thanks for reply Antony. But from what I've read, HYPOPG only allows you to create hypothetical indexes, so the DBA can analyze if it brings benefits.What I would like is a tool that from a SQL Query indicates which indexes would be recommended to decrease the response time.Best RegardsNeto2017-10-31 15:19 GMT-02:00 Anthony Sotolongo <[email protected]>:Hi Neto, maybe HypoPGCan help you:https://github.com/dalibo/hypopgEl 31 oct. 2017 2:13 PM, \"Neto pr\" <[email protected]> escribió:Hello All\n\nI'm researching on Index-Advisor Tools to be applied in SQL queries.\nAt first I found this:\n\n- EnterpriseDB - https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html\n\nSomeone would know of other tools for this purpose. I'd appreciate it if you can help me.\nBest RegardsNeto",
"msg_date": "Tue, 31 Oct 2017 17:25:36 -0200",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index-Advisor Tools"
},
{
"msg_contents": "\nI have not used it yet, but from the presentation, very promising:\n\nhttps://medium.com/@ankane/introducing-dexter-the-automatic-indexer-for-postgres-5f8fa8b28f27\n\nhttps://github.com/ankane/dexter\n\n-- \nhttps://yves.zioup.com\ngpg: 4096R/32B0F416 \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Oct 2017 14:04:03 -0600",
"msg_from": "Yves Dorfsman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index-Advisor Tools"
},
{
"msg_contents": "On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes\n<[email protected]> wrote:\n> I will be very happy with a tool(or a stats table) that shows the most\n> searched values from a table(since a statistic reset). i.e.:\n>\n> table foo (id int, year int)\n>\n> top 3 searched value for year field: 2017(500x), 2016(300x), 2015(55x)\n>\n> With this info we can create partial indexes or do a table partitioning.\n>\n>\n>\n> 2017-10-31 15:25 GMT-02:00 Neto pr <[email protected]>:\n>>\n>> Thanks for reply Antony.\n>> But from what I've read, HYPOPG only allows you to create hypothetical\n>> indexes, so the DBA can analyze if it brings benefits.\n>> What I would like is a tool that from a SQL Query indicates which indexes\n>> would be recommended to decrease the response time.\n\npowa + pg_qualstats will give you this kind of information, and it can\nanalyse the actual queries and suggest indexes that could boost them,\nor show constant repartition for the different WHERE clauses.\n\nYou can get more information on http://powa.readthedocs.io/en/latest/.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Oct 2017 21:04:12 +0100",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index-Advisor Tools"
},
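For anyone wanting to try the pg_qualstats route mentioned above: the extension only collects data once it is preloaded, so a restart is needed before it can be created. A minimal setup sketch based on the extension's documented interface; the powa-web side is left out here:

    -- takes effect only after a server restart
    ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements, pg_qualstats';

    -- after the restart, in the database to be monitored
    CREATE EXTENSION pg_qualstats;

    -- sampled WHERE/JOIN predicates from real queries, including the constants used
    SELECT * FROM pg_qualstats();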
{
"msg_contents": "On Tue, Oct 31, 2017 at 8:06 PM Julien Rouhaud <[email protected]> wrote:\n\n> On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes\n> <[email protected]> wrote:\n> > I will be very happy with a tool(or a stats table) that shows the most\n> > searched values from a table(since a statistic reset).\n>\n\nAs a vendor, I normally stay silent on this list, but I feel compelled to\nspeak here. This is a feature we built support for in VividCortex. (I'm the\nfounder and CEO). Unlike most PostgreSQL monitoring tools, our product not\nonly aggregates query activity into metrics, but retains a rich and\nrepresentative sample set of the actual statements that executed, including\nfull parameters (even for prepared statements), and all of the properties\nfor the query: the connection's origin, the timestamp, latency, etc. These\nare mapped visually to a scatterplot, and you can instantly see where there\nare clusters of latency outliers, etc, and inspect those quickly. It\nincludes EXPLAIN plans and everything else you need to understand how that\nstatement executed. VividCortex may not be suitable for your scenario, but\nour customers do use it frequently for finding queries that need indexes\nand determining what indexes to add.\n\nOn Tue, Oct 31, 2017 at 8:06 PM Julien Rouhaud <[email protected]> wrote:On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes\n<[email protected]> wrote:\n> I will be very happy with a tool(or a stats table) that shows the most\n> searched values from a table(since a statistic reset).As a vendor, I normally stay silent on this list, but I feel compelled to speak here. This is a feature we built support for in VividCortex. (I'm the founder and CEO). Unlike most PostgreSQL monitoring tools, our product not only aggregates query activity into metrics, but retains a rich and representative sample set of the actual statements that executed, including full parameters (even for prepared statements), and all of the properties for the query: the connection's origin, the timestamp, latency, etc. These are mapped visually to a scatterplot, and you can instantly see where there are clusters of latency outliers, etc, and inspect those quickly. It includes EXPLAIN plans and everything else you need to understand how that statement executed. VividCortex may not be suitable for your scenario, but our customers do use it frequently for finding queries that need indexes and determining what indexes to add.",
"msg_date": "Mon, 06 Nov 2017 18:52:44 +0000",
"msg_from": "Baron Schwartz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index-Advisor Tools"
}
] |
[
{
"msg_contents": "Hi, this is Gunther, have been with PgSQL for decades, on an off this \nlist. Haven't been on for a long time making my way just fine. But there \nis one thing that keeps bothering me both with Oracle and PgSQL. And \nthat is the preference for Nested Loops.\n\nOver the years the archives have questions about Nested Loops being \nchosen over Hash Joins. But the responses seem too specific to the \npeople's queries, ask many questions, make them post the query plans, \nand often end up frustrating with suggestions to change the data model \nor to add an index and stuff like that.\n\nOne should not have to go into that personal detail.\n\nThere are some clear boundaries that a smart database should just never \ncross.\n\nEspecially with OLAP queries. Think a database that is fine for OLTP, \nhas indexes and the index based accesses for a few records joined with a \ndozen other tables all with indexes is no problem. If you fall into a \nSeq Scan scenario or unwanted Hash Join, you usually forgot to add an \nindex or forgot to put index columns into your join or other \nconstraints. Such are novice questions and we should be beyond that.\n\nBut the issue is bulk searches, reports, and any analytic queries \nscenarios. In those queries Nested Loops are almost always a bad choice, \neven if there is an index. In over 20 years of working with RDBMs this \nhas been my unfailing heuristics. A report runs slow? Look at plan, is \nthere a Nested Loop? Yes? Squash it! And the report runs 10x faster \ninstantaneously.\n\nSo, all the more troublesome is if any database system (here PgSQL) \nwould ever fall into a Nested Loop trap with CPU spinning at 100% for \nseveral minutes, with a Nested Loop body of anything from a Seq Scan or \nworse with a cardinality of anything over 10 or 100. Nested Loops of \nNested Loops or Nested Loops of other complex query plan fragments \nshould be a no-no and chosen only as an absolute last resort when the \nsystem cannot find enough memory, even then disk based merge sort should \nbe better, i.e., Nested Loops should never be chosen. Period.\n\nIf you can set enable_nestloop off and the Hash Join is chosen and the \nperformance goes from 1 hour of 100% CPU to 10 seconds completion time, \nthen something is deadly wrong. And it doesn't matter to me if I should \nhave re-written my query in some funny ways or tweaked my data model, \nthese are all unacceptable options when you have a complex system with \nhybrid OLTP/OLAP uses. Don't tell me to de-normalize. I know I can \nmaterialize joins in tables which I can then use again in joins to save \ntime. But that is not the point here.\n\nAnd I don't think tweaking optimizer statistics is the solution either. \nBecause optimizer statistics quickly become worthless when your criteria \nget more complex.\n\nThe point is that Nested Loops should never be chosen except in index \nlookup situations or may be memory constraints.\n\nHow can I prevent it on a query by query scope? I cannot set \nenable_nestloop = off because one query will be for a full report, wile \nanother one might have indexed constraints running in the same session, \nand I don't want to manage side effects and remember to set \nenable_nestloop parameter on and off.\n\nThere must be a way to tell the optimizer to penalize nested loops to \nmake them the last resort. 
In Oracle there are those infamous hints, but \nthey don't always work either (or it is easy to make mistakes that you \nget no feedback about).\n\nIs there any chance PgSQL can get something like a hint feature? Or is \nthere a way to use postgresql.conf to penalize nested loops so that they \nwould only ever be chosen in the most straight-forward situations as \nwith query parameters that are indexed? I know I need to have sufficient \nwork_mem, but if you can set enable_nestloop = off and you get the \ndesired Hash Join, there is obviously sufficient work_mem, so that isn't \nthe answer either.\n\nThanks for listening to my rant.\n\nregards,\n-Gunther\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Nov 2017 20:28:41 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "OLAP/reporting queries fall into nested loops over seq scans or other\n horrible planner choices"
},
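A quick way to test the claim in the message above without touching other statements in the same session is to flip the setting inside a single transaction; the query itself is a placeholder here, not from the thread:

BEGIN;
SET LOCAL enable_nestloop = off;   -- reverted automatically at COMMIT or ROLLBACK
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;                       -- the slow reporting query goes here
ROLLBACK;                          -- leaves the session settings untouched

If the plan flips to a hash join and the runtime collapses as described, the root cause is almost always a row-count misestimate feeding the join choice rather than insufficient work_mem.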
{
"msg_contents": "\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of Gunther\r\n> Sent: Wednesday, November 01, 2017 20:29\r\n> To: [email protected]\r\n> Subject: [PERFORM] OLAP/reporting queries fall into nested loops over seq\r\n> scans or other horrible planner choices\r\n> \r\n> Hi, this is Gunther, have been with PgSQL for decades, on an off this list.\r\n> [...]\r\n> Thanks for listening to my rant.\r\n> \r\n> regards,\r\n> -Gunther\r\n\r\n [Laurent Hasson] \r\nHello Gunther,\r\n\r\nJust adding to your voice. I recently experienced the same issue with a complex multi-table view, including pivots, and was surprised to see all the nested loops everywhere in spite of indices being available. I spent a lot of time optimizing the query and went from about 1h to about 3mn, but penalizing nested loops in favor of other \"joining\" techniques seem to make sense as a strategy. Either that, or there is something I really don't understand here either and would love to be educated :)\r\n\r\nLaurent.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Nov 2017 02:44:06 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq\n scans or other horrible planner choices"
},
{
"msg_contents": "Gunther wrote:\n> But there \n> is one thing that keeps bothering me both with Oracle and PgSQL. And \n> that is the preference for Nested Loops.\n\n[...]\n\n> But the issue is bulk searches, reports, and any analytic queries \n> scenarios. In those queries Nested Loops are almost always a bad choice, \n> even if there is an index. In over 20 years of working with RDBMs this \n> has been my unfailing heuristics. A report runs slow? Look at plan, is \n> there a Nested Loop? Yes? Squash it! And the report runs 10x faster \n> instantaneously.\n\n[...]\n\n> If you can set enable_nestloop off and the Hash Join is chosen and the \n> performance goes from 1 hour of 100% CPU to 10 seconds completion time, \n> then something is deadly wrong.\n\n[...]\n\n> The point is that Nested Loops should never be chosen except in index \n> lookup situations or may be memory constraints.\n> \n> How can I prevent it on a query by query scope? I cannot set \n> enable_nestloop = off because one query will be for a full report, wile \n> another one might have indexed constraints running in the same session, \n> and I don't want to manage side effects and remember to set \n> enable_nestloop parameter on and off.\n> \n> There must be a way to tell the optimizer to penalize nested loops to \n> make them the last resort. In Oracle there are those infamous hints, but \n> they don't always work either (or it is easy to make mistakes that you \n> get no feedback about).\n> \n> Is there any chance PgSQL can get something like a hint feature?\n\nPostgreSQL doesn't have a way to tell if a query is an OLAP query\nrunning against a star schema or a regular OLTP query, it will treat\nboth in the same fashion.\n\nI also have had to deal with wrongly chosen nested loop joins, and\ntesting a query with \"enable_nestloop=off\" is one of the first things\nto try in my experience.\n\nHowever, it is not true that PostgreSQL \"perfers nested loops\".\nSometimes a nested loop join is the only sane and efficient way to\nprocess a query, and removing that capability would be just as\nbad a disaster as you are experiencing with your OLAP queries.\n\nBad choices are almost always caused by bad estimates.\nGranted, there is no way that estimates can ever be perfect.\n\nSo what could be done?\n\nOne pragmatic solution would be to wrap every query that you know\nto be an OLAP query with\n\nBEGIN;\nSET LOCAL enable_nestloop=off;\nSELECT ...\nCOMMIT;\n\nLooking deeper, I would say that wrongly chosen nested loop joins\noften come from an underestimate that is close to zero.\nPostgreSQL already clamps row count estimates to 1, that is, it will\nchoose an estimate of 1 whenever it thinks fewer rows will be returned.\n\nPerhaps using a higher clamp like 2 would get rid of many of your\nproblems, but it is a difficult gamble as it will also prevent some\nnested loop joins that would have been the best solution.\n\nFinally, even though the official line of PostgreSQL is to *not* have\nquery hints, and for a number of good reasons, this is far from being\nan unanimous decision. The scales may tip at some point, though I\npersonally hope that this point is not too close.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 Nov 2017 09:30:28 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
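Two variations on the SET LOCAL wrapper suggested above that avoid editing every query; the role name, function signature, and query body are illustrative placeholders, not from the thread:

-- every session that logs in as this role gets the setting automatically
ALTER ROLE reporting_user SET enable_nestloop = off;

-- or scope it to one server-side function that runs the report
CREATE FUNCTION run_report(p_from date, p_to date)
RETURNS SETOF record
LANGUAGE sql
SET enable_nestloop = off            -- applies only while this function executes
AS $$ SELECT ... $$;

The per-role form is convenient when OLTP and reporting traffic already arrive on different database roles, since neither side has to remember to toggle anything.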
{
"msg_contents": "Thanks for your support Laurent.\n\nI have an idea on one thing you said:\n\n> Just adding to your voice. I recently experienced the same issue with a complex multi-table view, including pivots, and was surprised to see all the nested loops everywhere\nand here is the clue for me:\n> in spite of indices being available.\nI would say that sometimes indexes are detrimental. If you don't need \nthem for other reasons, you might want to not have them. And without the \nindex, the Nested Loop strategy might not be chosen.\n\nBut that is a side-issue, because it can often not be avoided. Just \nsaying in case it might help.\n\nI also found the opposite now. In the query that made me \"blow the lid\" \nand \"complain\" here, my team decided to add an index and that did not \nget rid of Nested Loops but at least made the inner table access indexed \nrather than a table scan and the performance ended up OK. But it's not \nalways predictable, and these indexes could trap the planner into \nsub-optimal solutions still.\n\nI think there is an opportunity for a PgSQL query plan extension, \nespecially wen dealing with CTE (WITH-clauses), PgSQL could make them a \ntemporary table and add indexes that it needs for it on the fly, because \nafter it has done one pass over the inner loop sequential scan it knows \nperfectly well how many rows it has, and knowing how many more \niterations are coming from the sub-query that's driving the Nested Loop, \nit could decide that it's much faster to put an index on the nested \nrelation, temporarily materialized. Or it could even decide to change \nit's plan mid-way and do the Hash Join.\n\nThis is why I had always dreamed that the PgSQL optimizer had some easy \nAPI where one could plug in experimental strategies. I personally am \nextremely efficient with XSLT for complex intelligent algorithms, and I \ndream of a PgSQL query plan structure exposed as XML which an XSLT \nplugin could then process to edit the plan. People could experiment with \nawesome intelligent new strategies based on statistics gathered along \nthe way of the execution.\n\nregards,\n-Gunther\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Nov 2017 12:36:13 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq\n scans or other horrible planner choices"
},
{
"msg_contents": "Thanks you for your thoughtful reply, Laurenz (funny that the people \ninterested in this topic are named Laurent and Laurenz :)\n\n> PostgreSQL doesn't have a way to tell if a query is an OLAP query\n> running against a star schema or a regular OLTP query, it will treat\n> both in the same fashion.\nright, of course, and I would not want to go down that road. There OLAP \nvs. OLTP are not just two cut and dry options, and neither is \"star \nschema\" but one way in which to lay out a simple data model. The real \nworld is always more complex than such cut and dry choices\n> However, it is not true that PostgreSQL \"perfers nested loops\".\n> Sometimes a nested loop join is the only sane and efficient\n> way to process a query ...\nof course, it's not preferring NLs deliberately, but it happens awfully \noften (and not just with PgSQL, same problems I have had with Oracle \nover the years).\n> Bad choices are almost always caused by bad estimates.\n> Granted, there is no way that estimates can ever be perfect.\n> ...\n> Looking deeper, I would say that wrongly chosen nested loop joins\n> often come from an underestimate that is close to zero.\n> PostgreSQL already clamps row count estimates to 1, that is, it will\n> choose an estimate of 1 whenever it thinks fewer rows will be returned.\n>\n> Perhaps using a higher clamp like 2 would get rid of many of your\n> problems, but it is a difficult gamble as it will also prevent some\n> nested loop joins that would have been the best solution.\nWow, that is very interesting! Are you saying that if PgSQL can't know \nwhat the cardinality is, it assumes a default of 1? That would be very \nslanted a guess. I would think a couple of hundred would be more \nappropriate, or 10% of the average of the base tables for which it does \nhave statistics. I would wonder if changing 1 to 2 would make much \ndifference, as Seq Search over 1 to 10 tuples should generally be better \nthan any other approach, as long as the 1-10 tuples are already readily \navailable.\n> Finally, even though the official line of PostgreSQL is to *not* have\n> query hints, and for a number of good reasons, this is far from being\n> an unanimous decision. The scales may tip at some point, though I\n> personally hope that this point is not too close.\n\nI am glad to hear that hints are not completely ruled out by the \ndevelopment team. Definitely Oracle hints are painful and should not be \nreplicated as is. Butmay be I can nudge your (and others') personal \ntastes with the following.\n\nYou suggested this:\n\n> One pragmatic solution would be to wrap every query that you know\n> to be an OLAP query with\n> BEGIN;\n> SET LOCAL enable_nestloop=off;\n> SELECT ...\n> COMMIT;\nI would also like to put the set enable_nestloop = false statement into \na combined statement, but when I do it in a transaction like you showed, \nit would not work for a normal PreparedStatement just expecting a \nResultSet, or at least I haven't been able to make that work. In my Aqua \nData Studio, if I put the set statement before the select statement, the \ncombined statement doesn't return any results. May be I am doing \nsomething wrong. If there is a way, then I would ave what I need.\n\nIf not, I think it might be an easy thing to add.\n\nWe already have different scopes of these optimizer parameters like \nenable_nestloop\n\n1. the system wide scope\n\n2. 
a session wide scope\n\nand I see no reason why one could not just add a non-disruptive syntax \nform to change these parameters on a statement-wide scope. By all means \nin a comment.\n\nWhy not\n\n--! set enable_nestloop = false\n--! set work_mem = '20 MB'\nSELECT *\n FROM ....\n;\n\nsomething like that. It would not be a big deal, no completely new \nobscure hint syntax.\n\nAnd may be, if that is possible so far, then why not add a CTE scope as \nwell:\n\nWITH Foo AS (\n--! set enable_nestloop = false\n SELECT * FROM ... INNER JOIN ... INNER JOIN ... INNER JOIN ... ...\n) , Bar AS (\n SELECT * FROM Foo INNER JOIN IndexedTable USING(a, b, c)\n)\nSELECT * FROM Bar ...\n;\n\nthis would keep the nestloop off for the CTE Foo with that complex join \nbut allow it to be used for the CTE Bar or the ultimate query.\n\nI think these features should be relatively easy to add without causing \nSQL compatibility issue and also not opening a can of worms with obscure \nhint features that need a lot of work to implement correctly.\n\nBut while we are at dreaming up solution, I think materialized indexed \nsub-plans would also be a nice ting, especially when dealing with CTEs. \nThis could be controlled manually to begin with:\n\nWITH Foo AS (\n--! set enable_nestloop = false\n SELECT * FROM ... INNER JOIN ... INNER JOIN ... INNER JOIN ... ...\n) MATERIALIZE INDEX ON(a, b, c)\n, Bar AS (\n SELECT * FROM Foo INNER JOIN IndexedTable USING(a, b, c)\n)\nSELECT * FROM Bar ...\n;\n\nAnd of course if we don't want to disturb SQL syntax, the \"materialize \nindex on ...\" clause could be in a --! comment.\n\nBut then, to dream on, PgSQL could make sub-query plans a temporary \ntable and add indexes that it needs for it on the fly, because after it \nhas done one pass over the inner loop sequential scan it already has a \nperfect guess of what the cardinality is, and knowing how many more \niterations are coming from the sub-query that's driving the Nested Loop, \nit could decide that it's much faster to put an index on the nested \nrelation, temporarily materialized. Or it could even decide to change \nit's plan mid-way and do the Hash Join.\n\nLet's call them dynamic feedback plan optimization.\n\nThis is why I had always dreamed that the PgSQL optimizer had some easy \nAPI where one could plug in experimental strategies. I personally am \nextremely efficient with XSLT for complex intelligent algorithms, and I \ndream of a PgSQL query plan structure exposed as XML which an XSLT \nplugin could then process to edit the plan. People could experiment with \nawesome intelligent new strategies based on statistics gathered along \nthe way of the execution.\n\nregards,\n-Gunther\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Nov 2017 13:14:29 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq\n scans or other horrible planner choices"
},
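The "MATERIALIZE INDEX ON" idea in the message above can be approximated by hand today; a rough sketch with made-up table and column names:

CREATE TEMP TABLE foo AS
SELECT ...                          -- body of the expensive CTE
FROM big_a JOIN big_b USING (x);

CREATE INDEX ON foo (a, b, c);
ANALYZE foo;                        -- the planner now sees real row counts, not a guess

SELECT *
FROM foo
JOIN indexedtable USING (a, b, c);

The ANALYZE step is what buys the "dynamic feedback": the final query is planned against actual statistics for the materialized intermediate result instead of a default estimate.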
{
"msg_contents": "Hello,\n\nmay I suggest you to look at\nhttps://github.com/ossc-db/pg_hint_plan\nthat mimics Oracle hints syntax\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Nov 2017 11:03:50 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq scans or\n other horrible planner choices"
},
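For reference, pg_hint_plan hints are written as a special comment in front of the statement, roughly like the sketch below; the aliases and query are placeholders, the extension must be loaded first (shared_preload_libraries or a per-session LOAD), and the project's documentation should be checked for the exact hint names:

LOAD 'pg_hint_plan';

/*+ HashJoin(f d) Set(enable_nestloop off) */
EXPLAIN (ANALYZE)
SELECT ...
FROM fact f
JOIN dim d USING (dim_id);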
{
"msg_contents": "Gunther wrote:\n> > Bad choices are almost always caused by bad estimates.\n> > Granted, there is no way that estimates can ever be perfect.\n> > ...\n> > Looking deeper, I would say that wrongly chosen nested loop joins\n> > often come from an underestimate that is close to zero.\n> > PostgreSQL already clamps row count estimates to 1, that is, it will\n> > choose an estimate of 1 whenever it thinks fewer rows will be returned.\n> > \n> > Perhaps using a higher clamp like 2 would get rid of many of your\n> > problems, but it is a difficult gamble as it will also prevent some\n> > nested loop joins that would have been the best solution.\n> \n> Wow, that is very interesting! Are you saying that if PgSQL can't know \n> what the cardinality is, it assumes a default of 1? That would be very \n> slanted a guess. I would think a couple of hundred would be more \n> appropriate, or 10% of the average of the base tables for which it does \n> have statistics. I would wonder if changing 1 to 2 would make much \n> difference, as Seq Search over 1 to 10 tuples should generally be better \n> than any other approach, as long as the 1-10 tuples are already readily \n> available.\n\nNo, it is not like that.\nWhen PostgreSQL cannot come up with a \"real\" estimate, it uses\ndefault selectivity estimates.\n\nSee include/utils/selfuncs.h:\n\n/*\n * Note: the default selectivity estimates are not chosen entirely at random.\n * We want them to be small enough to ensure that indexscans will be used if\n * available, for typical table densities of ~100 tuples/page. Thus, for\n * example, 0.01 is not quite small enough, since that makes it appear that\n * nearly all pages will be hit anyway. Also, since we sometimes estimate\n * eqsel as 1/num_distinct, we probably want DEFAULT_NUM_DISTINCT to equal\n * 1/DEFAULT_EQ_SEL.\n */\n\n/* default selectivity estimate for equalities such as \"A = b\" */\n#define DEFAULT_EQ_SEL 0.005\n\n/* default selectivity estimate for inequalities such as \"A < b\" */\n#define DEFAULT_INEQ_SEL 0.3333333333333333\n\n/* default selectivity estimate for range inequalities \"A > b AND A < c\" */\n#define DEFAULT_RANGE_INEQ_SEL 0.005\n\n/* default selectivity estimate for pattern-match operators such as LIKE */\n#define DEFAULT_MATCH_SEL 0.005\n\n/* default number of distinct values in a table */\n#define DEFAULT_NUM_DISTINCT 200\n\n/* default selectivity estimate for boolean and null test nodes */\n#define DEFAULT_UNK_SEL 0.005\n#define DEFAULT_NOT_UNK_SEL (1.0 - DEFAULT_UNK_SEL)\n\nThose selectivity estimates are factors, not absolute numbers.\n\nThe clamp to 1 happens when, after applying all selectivity factors, the\nresult is less than 1, precisely to keep the optimizer from choosing a plan\nthat would become very expensive if a branch is executed *at all*.\n\n> > Finally, even though the official line of PostgreSQL is to *not* have\n> > query hints, and for a number of good reasons, this is far from being\n> > an unanimous decision. The scales may tip at some point, though I\n> > personally hope that this point is not too close.\n> \n> I am glad to hear that hints are not completely ruled out by the \n> development team. Definitely Oracle hints are painful and should not be \n> replicated as is. Butmay be I can nudge your (and others') personal \n> tastes with the following.\n\nDidn't work for me.\nYour hints look just like what Oracle does.\n\nThere have been better proposals that aim at fixing the selectivity\nestimates, e.g. 
\"multiply your estimate for this join by three\".\n\n> In my Aqua \n> Data Studio, if I put the set statement before the select statement, the \n> combined statement doesn't return any results. May be I am doing \n> something wrong. If there is a way, then I would ave what I need.\n\nCheck the SQL statements that are generated by your Aqua Data Studio!\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 03 Nov 2017 09:01:52 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
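A toy illustration of how those default selectivities compound into the near-zero estimates described above; predicates on expressions have no statistics and fall back to DEFAULT_EQ_SEL, and exact estimates may vary by version:

CREATE TEMP TABLE t AS
SELECT g AS a, g % 1000 AS b, g % 10 AS c
FROM generate_series(1, 1000000) g;
ANALYZE t;

EXPLAIN SELECT * FROM t WHERE (a + 0) = 1 AND (b + 0) = 1;
-- roughly 1000000 * 0.005 * 0.005 = 25 estimated rows

EXPLAIN SELECT * FROM t WHERE (a + 0) = 1 AND (b + 0) = 1 AND (c + 0) = 1;
-- 1000000 * 0.005^3 = 0.125, which is clamped up to 1 estimated row

That clamped "1 row" is exactly the situation where a nested loop looks nearly free to the planner.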
{
"msg_contents": "Laurenz Albe wrote on 02.11.2017 at 09:30:\n> Finally, even though the official line of PostgreSQL is to *not* have\n> query hints, and for a number of good reasons, this is far from being\n> an unanimous decision. The scales may tip at some point, though I\n> personally hope that this point is not too close.\n\nI also think that hints are not the right way to solve problems like that.\n\nI do like Oracle's approach with SQL profiles, where you can force the\noptimizer to try harder to find a good execution plan. I _think_ it even\nruns the statement with multiple plans and compares the expected outcome\nwith the actual values. Once a better plan is found, that plan can be\nattached to that query and the planner will use that plan with subsequent\nexecutions.\n\nThis however requires a much bigger infrastructure than simple hints.\n\n(Unrelated, but: maybe a compromise of the never-ending \"hints vs. no hints\"\ndiscussion would be, to think about integrating the existing \"pg_hint_plan\"\nas a standard contrib module)\n\nThomas\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 02:28:47 -0700 (MST)",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq scans or\n other horrible planner choices"
},
{
"msg_contents": "On Fri, Nov 3, 2017 at 5:28 AM, Thomas Kellerer <[email protected]> wrote:\n> I do like Oracle's approach with SQL profiles, where you can force the\n> optimizer to try harder to find a good execution plan. I _think_ it even\n> runs the statement with multiple plans and compares the expected outcome\n> with the actual values. Once a better plan is found that plan can be\n> attached to that query and the planner will use that plan with subsequent\n> executions.\n\nI also think that this is a really cool approach. For those specific\nproblem queries, pretty much tell the optimizer \"do your best to make\nthis as efficient as possible\".\n\nTo make that more useful though, you'd probably need a shared query\ncache that would be persisted through restarts. I'd assume if you\nhave a problem query, this very heavy \"planning / optimization\"\noperation would not be something you wanted every connection to have\nto do every time they connect.\n\nI wish I was more knowledgeable about the internals so I could more\nclearly see how a system like that could come together, and what other\ngroundwork would be needed building up to it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 08:07:13 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
{
"msg_contents": "On Fri, Nov 3, 2017 at 5:28 AM, Thomas Kellerer <[email protected]> wrote:\n>> I do like Oracle's approach with SQL profiles, where you can force the\n>> optimizer to try harder to find a good execution plan. I _think_ it even\n>> runs the statement with multiple plans and compares the expected outcome\n>> with the actual values. Once a better plan is found that plan can be\n>> attached to that query and the planner will use that plan with subsequent\n>> executions.\nI have used that approach with Oracle. I didn't like it. It is too \ndifficult, too complicated. Requires all sorts of DBA privileges. \nNothing that would help a lowly user trying his ad-hoc queries.\n\nI think a \"dynamic feedback plan optimization\" would be more innovative \nand ultimately deliver better on the original RDBMS vision. The RDBMS \nshould exert all intelligence that it can to optimize the query \nexecution. (I know that means: no reliance on hints.)\n\nThere is so much more that could be done, such as materialized and \npotentially indexed partial results. (I know Oracle as materialized \npartial results).\n\nBut the dynamic feedback plan would be even cooler. So that means the \nouter relation should be built or sampled to estimate the selectivity, \nthe inner relation should be built completely, and if it is too large, \nit should be thrown back to the optimizer to change the plan.\n\nOr may be the planner needs some second look pattern matching \ncriticizer: Any pattern of Nested Loop I would re-check and possibly \nsample a few rows. And Nested Loop with costly inner loop should almost \nalways be avoided. Nested Loop of Seq Scan is a no-no unless it can be \nproven that the cardinality of the inner relation to scan is less than 100.\n\nBut even more, once you have the inner and outer table of a Nested Loop \nbuilt or sampled, there should be no reason not to run the Hash Join. I \nguess I still don't get why the optimizer even today would EVER consider \na Nested Loop over a Hash Join, unless there is some clear indication \nthat the query will be used to just get the FIRST ROWS (Oracle hint) and \nthat those first rows will actually exist (user waits 30 minutes at 100% \nCPU only to be informed that the query has no results!), and that the \nresults are likely to come out early in the Nested Loop! So many \nconstraints to make that Nested Loop plan a successful strategy. Why \never choose it???\n\nI guess, hints or no hints, I think Nested Loops should not be used by \nthe optimizer unless it has positive indication that it meets all the \ncriteria for being a good strategy, i.e., that there is a continuous \npath of indexed columns starting with constant query parameters. This is \nthe usual OLTP query. And that is what Nested Loops are for. But in all \nother cases, and if space allows at all, always use Hash Joins. It is \neven cheaper to do a trial and error! Assume that space will allow, and \nquit if it doesn't, rather than being sheepish and going to a 1 hour CPU \nbound operation. Because if space does not allow, the chance for Nested \nLoop being a good idea is also close to nil! So if space doesn't allow, \nit would be Sort-Merge on Disk. Especially if the query has a DISTINCT \nor ORDER BY clause anyway! 
Why is that not always a better strategy?\n\nAnd yes, until all this is figured out: by all means include the \npg_hint_plan.c -- pretty please!\n\nregards,\n-Gunther\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 10:36:57 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
{
"msg_contents": "Just throwing out some more innovative ideas.\n\nMaterialized join tables, I have read somewhere. OK, difficult to keep \nconsistent with transactions. Forget that.\n\nBut, why not collect statistics on every join that is processed, even if \nthe query is interrupted. Then as more and more plans are run, and \ninterrupted for being too slow, statistics on the joins are collected \nand can inform the optimizer next time not to use that approach.\n\nWould work like magic for a user.\n\nUser writes a query. It runs 3 minutes and has no result. User interrupts \nthe query (THANKS PgSQL for allowing that, unlike Oracle!). Now the \nstatistics have already been gathered.\n\nUser reruns the query, not changing anything. Because the statistics on \n(some of) the joins have been gathered, at least with an initial sample, \nnow the planner will likely choose a different plan. Say, now the \nresults come in at 2 minutes and the user is satisfied. But still more \ncomplete statistics were collected.\n\nNow the user changes a few query parameters and runs the query again, or \nputs it into a more complex query. This time the planner has even more \nstatistics and chooses an even better plan. And lo and behold now the \nresults come in at 10 seconds!\n\nAt no point did the user have to analyze the explain plan, come up with \nhints and tricks and nudges to the optimizer. And at no point did the \nuser have to become DBA to run some outlandish PL/SQL procedures for \nwhich he does not have the license key or the special privileges.\n\nBut until that is done, please put in the pg_hint_plan.c. Hints don't \nhurt. If you don't like them, don't use them.\n\nregards,\n-Gunther\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 10:51:31 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
{
"msg_contents": "To limit NL usage, wouldn't a modified set of Planner Cost Constants \nhttps://www.postgresql.org/docs/current/static/runtime-config-query.html\n\nseq_page_cost \nrandom_page_cost \ncpu_tuple_cost \ncpu_index_tuple_cost \ncpu_operator_cost \n\nbe more hash join friendly (as Oracle's optimizer_index_cost_adj)?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 07:55:45 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLAP/reporting queries fall into nested loops over seq scans or\n other horrible planner choices"
},
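One possible session-level experiment along these lines; the values below are only illustrative starting points (random_page_cost 4.0 and cpu_index_tuple_cost 0.005 are the shipped defaults) and should be validated with EXPLAIN (ANALYZE) on the real workload:

SET work_mem = '256MB';            -- lets a hash table stay in memory
SET random_page_cost = 4.0;        -- keep repeated index probes expensive on spinning disks
SET cpu_index_tuple_cost = 0.01;   -- doubled from the default, nudging the planner away from index-driven nested loops
EXPLAIN (ANALYZE) SELECT ... ;     -- the report query goes here
RESET ALL;                         -- put the session back to normal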
{
"msg_contents": "On 11/3/2017 10:55, legrand legrand wrote:\n> To limit NL usage, wouldn't a modified set of Planner Cost Constants\n> https://www.postgresql.org/docs/current/static/runtime-config-query.html\n> <https://www.postgresql.org/docs/current/static/runtime-config-query.html>\n>\n> seq_page_cost\n> random_page_cost\n> cpu_tuple_cost\n> cpu_index_tuple_cost\n> cpu_operator_cost\n>\n> be more hash join freindly (as Oracle' optimizer_index_cost_adj )?\n>\nI twiddled with some of these and could nudge it toward a Sort Merge \ninstead NL. But it's hit or miss.\n\nMay be there should be a tool which you can run periodically which will \ntest out the installation to see how IO, CPU, and memory performs. Or, \nagain, these statistics should be collected during normal operation so \nthat nobody needs to guess them or test them in complex procedures. As \nthe system runs, it should sample the seq_page_cost and random_page_cost \n(noticing that it has a SSD or HDD) and it should see how much disk read \nis from cache and how much goes out to disk. Why isn't the executor of \nqueries the best person to ask for these cost constants?\n\nregards,\n-Gunther\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Nov 2017 11:13:35 -0400",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
},
{
"msg_contents": "Thank you Gunther for bringing this up. It's been bothering me quite a bit\nover time as well.\n\nForgive the naive question, but does the query planner's cost estimator\nonly track a single estimate of cost that gets accumulated and compared\nacross plan variants? Or is it keeping a range or probabilistic\ndistribution? I'm suspecting the former, but i bet either of the latter\nwould fix this rapidly.\n\nThe cases that frustrate me are where NL is chosen over something like HJ,\nwhere if the query planner is slightly wrong on the lower side, then NL\nwould certainly beat HJ (but by relatively small amounts), but a slight\nerror on the higher side means that the NL gets punished tremendously, due to\nthe big-o penalty difference it's paying over the HJ approach. Having the\nplanner with some notion of the distribution might help it make a better\nassessment of the potential consequences for being slightly off in its\nestimates. If it notices that being off on a plan involving a NL sends the\ndistribution off into hours instead of seconds, it could potentially avoid\nit even if it might be slightly faster in the mean.\n\n<fantasy> If i ever find time, maybe i'll try to play around with this idea\nand see how it performs... </fantasy>\n\n -dave-\n\nOn Fri, Nov 3, 2017 at 11:13 AM, Gunther <[email protected]> wrote:\n\n> On 11/3/2017 10:55, legrand legrand wrote:\n> [...]\n>\n> regards,\n> -Gunther\n>\n\n\n-- \n\nDave Nicponski\n\nChief Technology Officer\n\n917.696.3081\n\n\n|\n\n\[email protected]\n\n30 Vandam Street. 2nd Floor. NYC\n855.77.SEAMLESS | SeamlessGov.com <http://seamlessgov.com>\n",
"msg_date": "Fri, 3 Nov 2017 11:56:45 -0400",
"msg_from": "Dave Nicponski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: OLAP/reporting queries fall into nested loops over\n seq scans or other horrible planner choices"
}
] |
[
{
"msg_contents": "Hi all:\n\nI am new to pgsql, and even new to using a mailing list. I send this email\njust to give a suggestion on performance.\nI found that if a primary key, or a column which has a unique index, is in a\nselect SQL, the DISTINCT is sometimes unnecessary.\nActually, the SQL with DISTINCT will run more slowly than the SQL without\nDISTINCT.\n\nMy English is poor; here is the SQL:\n\nCREATE TABLE test_tbl (k INT PRIMARY KEY, col text);\nINSERT into test_tbl select generate_series(1,10000000), 'test';\n\nSQL with DISTINCT:\ntest=# explain analyze select distinct col, k from test_tbl order by k\nlimit 1000;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1277683.22..1277690.72 rows=1000 width=36) (actual\ntime=12697.994..12698.382 rows=1000 loops=1)\n -> Unique (cost=1277683.22..1329170.61 rows=6864985 width=36) (actual\ntime=12697.992..12698.311 rows=1000 loops=1)\n -> Sort (cost=1277683.22..1294845.68 rows=6864985 width=36)\n(actual time=12697.991..12698.107 rows=1000 loops=1)\n Sort Key: k, col\n Sort Method: external sort Disk: 215064kB\n -> Seq Scan on test_tbl (cost=0.00..122704.85 rows=6864985\nwidth=36) (actual time=0.809..7561.215 rows=10000000 loops=1)\n Planning time: 2.368 ms\n Execution time: 12728.471 ms\n(8 rows)\n\n\nSQL without DISTINCT:\ntest=# explain analyze select col, k from test_tbl order by k limit 1000;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..31.81 rows=1000 width=9) (actual time=0.667..1.811\nrows=1000 loops=1)\n -> Index Scan using test_tbl_pkey on test_tbl (cost=0.43..313745.06\nrows=10000175 width=9) (actual time=0.666..1.744 rows=1000 loops=1)\n Planning time: 0.676 ms\n Execution time: 1.872 ms\n(4 rows)\n\n\nAlso, after reading \"Understanding+How+PostgreSQL+Executes+a+Query\", I\nguess this happened:\n1. the planner sees DISTINCT\n2. the planner wants to use Unique\n3. the planner wants to use Sort or Hash\n4. the planner sees ORDER BY, and the \"order by column\" is k, which is in\nthe \"select columns\" -- col, k\n5. the planner uses Sort\n\nIn fact, k is the primary key, so not only is k distinct, but so is any\nvalue combined with k.\nAnd even more, we have a \"default primary key index\", so we do not need to\nsort either.\n\nSo my question is:\n1. Is my guess correct?\n2. Can we make the planner more clever, so that it can just ignore the\nDISTINCT in cases just like mine, and use the index?\n\n\nOther information:\nPostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\n\nBest regards.\nRui Liu\n",
"msg_date": "Sat, 4 Nov 2017 23:20:42 +0800",
"msg_from": "=?UTF-8?B?5YiY55Ge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unnecessary DISTINCT while primary key in SQL"
},
{
"msg_contents": "On 5 November 2017 at 04:20, 刘瑞 <[email protected]> wrote:\n> CREATE TABLE test_tbl ( k INT PRIMARY KEY, col text)\n> INSERT into test_tbl select generate_series(1,10000000), 'test';\n>\n> SQL with DISTINCT:\n> test=# explain analyze select distinct col, k from test_tbl order by k limit\n> 1000;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1277683.22..1277690.72 rows=1000 width=36) (actual\n> time=12697.994..12698.382 rows=1000 loops=1)\n> -> Unique (cost=1277683.22..1329170.61 rows=6864985 width=36) (actual\n> time=12697.992..12698.311 rows=1000 loops=1)\n> -> Sort (cost=1277683.22..1294845.68 rows=6864985 width=36)\n> (actual time=12697.991..12698.107 rows=1000 loops=1)\n> Sort Key: k, col\n> Sort Method: external sort Disk: 215064kB\n> -> Seq Scan on test_tbl (cost=0.00..122704.85 rows=6864985\n> width=36) (actual time=0.809..7561.215 rows=10000000 loops=1)\n> Planning time: 2.368 ms\n> Execution time: 12728.471 ms\n> (8 rows)\n\nThe current planner does not make much of an effort into recording\nwhich columns remain distinct at each level. I have ideas on how to\nimprove this and it would include improving your case here.\n\n9.6 did improve a slight variation of your query, but this was for\nGROUP BY instead of DISTINCT. Probably there's no reason why the same\noptimisation could not be applied to DISTINCT, I just didn't think of\nit when writing the patch.\n\nThe item from the release notes [1] reads \"Ignore GROUP BY columns\nthat are functionally dependent on other columns\"\n\nSo, if you were to write the query as:\n\nexplain analyze select col, k from test_tbl group by col, k order by k\nlimit 1000;\n\nIt should run much more quickly, although still not as optimal as it could be.\n\n[1] https://www.postgresql.org/docs/9.6/static/release-9-6.html\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Nov 2017 13:34:25 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary DISTINCT while primary key in SQL"
}
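Since k is the primary key, col is functionally dependent on it, so the GROUP BY form suggested above can presumably be reduced further to grouping on the key alone (non-grouped columns that depend on a grouped primary key have been allowed since PostgreSQL 9.1):

explain analyze select col, k from test_tbl group by k order by k limit 1000;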
] |
[
{
"msg_contents": "Good morning all,\r\n\r\nWe have a problem with performance after upgrading from 9.3 to 9.6 where certain queries take 9 times longer to run. On our initial attempt to upgrade, we noticed the system as a whole was taking longer to run through normal daily processes. The query with the largest run time was picked to act as a measuring stick.\r\n\r\nWe are using the staging server to test the upgrade. It has two 1.3TB partitions, each of which holds a copy of the near 1TB database. The active staging 9.3 database is on one of the partitions while a copy was made onto the other. A second instance of 9.3 was set up to verify the copy was successful and some performace tests were done, then upgraded to 9.6 via pg_upgrade. The same performance tests were done and this is where the 9 time slower number comes from.\r\n\r\nOS Ubuntu 14.04.4\r\nPG9.3 is 9.3.19-1.pgdg14.04+1 from http://apt.postgresql.org/pub/repos/apt/\r\nPG9.6 is 9.6.5-1.pgdg14.04+2 from same.\r\nThe server has 24 cores and 64GB RAM. Data drives are spinning platter in raid10 - not ssd.\r\n\r\nupgrade steps:\r\n* Ran rsync (excluding the xlog folder) from the active 9.3 partition to the unused partition while 9.3 was still running.\r\n* Once initial rsync completed, shut down 9.3 and reran the rsync command without the exclude.\r\n* Once second rsync completed, restarted 9.3 and left it alone.\r\n* Copied the active 9.3 configuration files into a new /etc/postgresql/9.3/ folder so we could test before/after numbers. Changed the config to point to the appropriate data/log/etc folders/files/port.\r\n* Started second 9.3 instance.\r\n* Altered the few foreign servers to account for the second instance's port.\r\n* Ran some arbitrary queries to check performance.\r\n* Installed 9.6 via apt\r\n* Created a 9.6 instance with its data directory on the same partition as the newely cloned 9.3 instance.\r\n* Ran pg_upgrade with --link option (includes running --check and compiling/installing postgis)\r\n* Copied the config from 9.3 and made minimal changes. Renamed log files, changed folders, removed checkpoint_segments is about it.\r\n* Started the 9.6 instance and was able to do arbitrary queries.\r\n* Ran the upgrade-suggested vacuumdb command on all databases to generate statistics\r\n\r\nAt that point, the database should be ready to use. Running the same set of arbitrary queries that were run on 9.3 yielded much worse performance, though.\r\nThe data is broken out by dealers containing customers owning vehicles. The arbitrary queries pull data from 8 tables (really 6 large[millions of records] and 2 smaller[hundreds] tables) to populate a new table via \"select [fields] into [new table]\". Twenty different dealers were used for testing meaning twenty of these queries. The system which runs these queries has 12 workers meaning up to 12 of these queries can be running concurrently. While the workers were offline, all twenty were queued up and then the workers activated. For each test, the order of the dealers was the same. That order was a mix of small/large dealers mixed - not exactly high,low,high; more like a few large then a few small. The run time for 9.3 was 21m9s and 9.6 was 3h18m25s.\r\n\r\nEach test was done while the other database was idle - htop showed little to no activity before each test started.\r\nperf reports (converted to flamegraph via https://github.com/brendangregg/FlameGraph) for the 9.6 test show about a 1/3 of the processor usage similar to that of graph for 9.3. 
The other 2/3 is still within the postgres process but starts with '[unknown]' and has 'connect', 'socket', and 'close' as the next call in the chain. I have not been able to figure out what postgres is doing to make these calls.\r\n\r\nChanging the configuration based on pgtune (command line version 0.9.3-2) did not make much change. The online pgtune at http://pgtune.leopard.in.ua/ had just a couple differences in settings I have yet to test.\r\n\r\nMain question is what the connect/socket/close calls in the perf output are and how to make them go away as they appear to be what is using up the added time. I'm hoping there is just a setting I've missed.\r\n\r\nQuery plan for a small dealer on 9.6 run without anything else running on the server\r\nhttps://explain.depesz.com/s/z71u\r\nPlanning time: 8.218 ms\r\nExecution time: 639319.525 ms\r\n\r\nSame query as run on 9.3\r\nhttps://explain.depesz.com/s/gjN3\r\nTotal runtime: 272897.150 ms\r\n\r\n\r\n--\r\nThanks,\r\nAdam Torres\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n\n\nGood morning all,\n \nWe have a problem with performance after upgrading from 9.3 to 9.6 where certain queries take 9 times longer to run. On our initial attempt to upgrade, we noticed the system as a whole was taking longer to\r\n run through normal daily processes. The query with the largest run time was picked to act as a measuring stick.\n \nWe are using the staging server to test the upgrade. It has two 1.3TB partitions, each of which holds a copy of the near 1TB database. The active staging 9.3 database is on one of the partitions while a\r\n copy was made onto the other. A second instance of 9.3 was set up to verify the copy was successful and some performace tests were done, then upgraded to 9.6 via pg_upgrade. The same performance tests were done and this is where the 9 time slower number\r\n comes from.\n \nOS Ubuntu 14.04.4\nPG9.3 is 9.3.19-1.pgdg14.04+1 from http://apt.postgresql.org/pub/repos/apt/\nPG9.6 is 9.6.5-1.pgdg14.04+2 from same.\nThe server has 24 cores and 64GB RAM. Data drives are spinning platter in raid10 - not ssd.\n \nupgrade steps:\n* Ran rsync (excluding the xlog folder) from the active 9.3 partition to the unused partition while 9.3 was still running.\n* Once initial rsync completed, shut down 9.3 and reran the rsync command without the exclude.\n* Once second rsync completed, restarted 9.3 and left it alone.\n* Copied the active 9.3 configuration files into a new /etc/postgresql/9.3/ folder so we could test before/after numbers. Changed the config to point to the appropriate data/log/etc folders/files/port.\n* Started second 9.3 instance.\n* Altered the few foreign servers to account for the second instance's port.\n* Ran some arbitrary queries to check performance.\n* Installed 9.6 via apt\n* Created a 9.6 instance with its data directory on the same partition as the newely cloned 9.3 instance.\n* Ran pg_upgrade with --link option (includes running --check and compiling/installing postgis)\n* Copied the config from 9.3 and made minimal changes. Renamed log files, changed folders, removed checkpoint_segments is about it.\n* Started the 9.6 instance and was able to do arbitrary queries.\n* Ran the upgrade-suggested vacuumdb command on all databases to generate statistics\n \nAt that point, the database should be ready to use. Running the same set of arbitrary queries that were run on 9.3 yielded much worse performance, though.\nThe data is broken out by dealers containing customers owning vehicles. 
The arbitrary queries pull data from 8 tables (really 6 large[millions of records] and 2 smaller[hundreds] tables) to populate a new\r\n table via \"select [fields] into [new table]\". Twenty different dealers were used for testing meaning twenty of these queries. The system which runs these queries has 12 workers meaning up to 12 of these queries can be running concurrently. While the workers\r\n were offline, all twenty were queued up and then the workers activated. For each test, the order of the dealers was the same. That order was a mix of small/large dealers mixed - not exactly high,low,high; more like a few large then a few small. The run\r\n time for 9.3 was 21m9s and 9.6 was 3h18m25s.\n \nEach test was done while the other database was idle - htop showed little to no activity before each test started.\nperf reports (converted to flamegraph via https://github.com/brendangregg/FlameGraph) for the 9.6 test show about a 1/3 of the processor usage similar to that of graph for 9.3. The other 2/3 is still within\r\n the postgres process but starts with '[unknown]' and has 'connect', 'socket', and 'close' as the next call in the chain. I have not been able to figure out what postgres is doing to make these calls.\n \nChanging the configuration based on pgtune (command line version 0.9.3-2) did not make much change. The online pgtune at http://pgtune.leopard.in.ua/ had just a couple differences in settings I have yet to\r\n test.\n \nMain question is what the connect/socket/close calls in the perf output are and how to make them go away as they appear to be what is using up the added time. I'm hoping there is just a setting I've missed.\n \nQuery plan for a small dealer on 9.6 run without anything else running on the server\nhttps://explain.depesz.com/s/z71u\nPlanning time: 8.218 ms\nExecution time: 639319.525 ms\n \nSame query as run on 9.3\nhttps://explain.depesz.com/s/gjN3\nTotal runtime: 272897.150 ms\n \n \n-- \n\nThanks,\nAdam Torres",
"msg_date": "Mon, 6 Nov 2017 13:18:00 +0000",
"msg_from": "Adam Torres <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance loss upgrading from 9.3 to 9.6"
},
{
"msg_contents": "On Mon, Nov 06, 2017 at 01:18:00PM +0000, Adam Torres wrote:\n> Good morning all,\n> \n> We have a problem with performance after upgrading from 9.3 to 9.6 where certain queries take 9 times longer to run. On our initial attempt to upgrade, we noticed the system as a whole was taking longer to run through normal daily processes. The query with the largest run time was picked to act as a measuring stick.\n\n> https://explain.depesz.com/s/z71u\n> Planning time: 8.218 ms\n> Execution time: 639319.525 ms\n> \n> Same query as run on 9.3\n> https://explain.depesz.com/s/gjN3\n> Total runtime: 272897.150 ms\n\nActually it looks to me like both query plans are poor..\n\n..because of this:\n| Hash Join (cost=85,086.25..170,080.80 ROWS=40 width=115) (actual time=32.673..84.427 ROWS=13,390 loops=1)\n| Hash Cond: (av.customer_id = cc_1.id)\n\nIf there are a large number of distinct customer_ids (maybe with nearly equal\nfrequencies), it might help to\nALTER TABLE av ALTER customer_id SET STATISTICS 400\n..same for cc_1.id. And re-analyze those tables (are they large??).\n\nsee if statistics improve:\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv,\nFROM pg_stats WHERE attname~'customers_customer' AND tablename='id' GROUP BY 1,2,3,4,5 ORDER BY 1\n\nGoal is to get at least an accurate value for n_distinct (but preferably also\nstoring the most frequent IDs). I wouldn't bother re-running the query unless\nyou find that increasing stats target causes the plan to change.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Nov 2017 08:21:42 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance loss upgrading from 9.3 to 9.6"
},
{
"msg_contents": "Justin,\r\nThanks for the reply.\r\n\r\nI changed the statistics on av.customer_id as suggested and the number returned by pg_stats went from 202,333 to 904,097. There are 11.2 million distinct customer_ids on the 14.8 million vehicle records. Rerunning the query showed no significant change in time (624 seconds vs. 639 seconds) - plan is at https://explain.depesz.com/s/e2fo.\r\n\r\nI went through the query looking for fields used in joins and conditions and applied the same steps to 7 other fields over 4 of the tables. Most n_distinct values did not change much but two did change from 1.# million to -1<x<0 which seems better based on n_distinct's definition. This improved the query a little; from 624 seconds down to 511 seconds. That plan is at https://explain.depesz.com/s/te50. This is the same query that ran in 272 seconds on 9.3 with the same data and previous statistics settings.\r\n\r\nIt has now been decided to try upgrading to 9.4 as that is the minimum to support Django 1.11 (which we are trying to upgrade a backend service to). The hope is whatever feature we have not configured properly in 9.6 is not there in 9.4.\r\n\r\n\r\nOn 11/6/17, 9:21 AM, \"Justin Pryzby\" <[email protected]> wrote:\r\n\r\n On Mon, Nov 06, 2017 at 01:18:00PM +0000, Adam Torres wrote:\r\n > Good morning all,\r\n > \r\n > We have a problem with performance after upgrading from 9.3 to 9.6 where certain queries take 9 times longer to run. On our initial attempt to upgrade, we noticed the system as a whole was taking longer to run through normal daily processes. The query with the largest run time was picked to act as a measuring stick.\r\n \r\n > https://explain.depesz.com/s/z71u\r\n > Planning time: 8.218 ms\r\n > Execution time: 639319.525 ms\r\n > \r\n > Same query as run on 9.3\r\n > https://explain.depesz.com/s/gjN3\r\n > Total runtime: 272897.150 ms\r\n \r\n Actually it looks to me like both query plans are poor..\r\n \r\n ..because of this:\r\n | Hash Join (cost=85,086.25..170,080.80 ROWS=40 width=115) (actual time=32.673..84.427 ROWS=13,390 loops=1)\r\n | Hash Cond: (av.customer_id = cc_1.id)\r\n \r\n If there are a large number of distinct customer_ids (maybe with nearly equal\r\n frequencies), it might help to\r\n ALTER TABLE av ALTER customer_id SET STATISTICS 400\r\n ..same for cc_1.id. And re-analyze those tables (are they large??).\r\n \r\n see if statistics improve:\r\n SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv,\r\n FROM pg_stats WHERE attname~'customers_customer' AND tablename='id' GROUP BY 1,2,3,4,5 ORDER BY 1\r\n \r\n Goal is to get at least an accurate value for n_distinct (but preferably also\r\n storing the most frequent IDs). I wouldn't bother re-running the query unless\r\n you find that increasing stats target causes the plan to change.\r\n \r\n Justin\r\n \r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Nov 2017 21:12:01 +0000",
"msg_from": "Adam Torres <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance loss upgrading from 9.3 to 9.6"
},
{
"msg_contents": "> It has now been decided to try upgrading to 9.4 as that is the minimum to support Django 1.11 (which we are trying to upgrade a backend service to). The hope is whatever feature we have not configured properly in 9.6 is not there in 9.4.\nIt's entirely possible whatever is causing your performance issue is\ncaused by the migration, rather than anything inherently different in\n9.6. The best test for that is setting another 9.3 server up,\nrestoring a backup, and testing there. If that is very different than\nwhat you are getting on 9.6 then it's something which changed in\nPostgres, if not it's just bad stats.\n\nI do think that it's probably better to fix your query rather than\nchoosing to upgrade to 9.4 rather than 9.6. You have a crazy amount\nof your query time spent in a single node. That plan is not good. If\nthat's the only query giving you trouble, work on optimizing it.\n\nJust my $0.02\n\n-Adam\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Nov 2017 21:59:30 -0500",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance loss upgrading from 9.3 to 9.6"
},
{
"msg_contents": "On 11/6/17, 9:21 AM, \"Justin Pryzby\" <[email protected]> wrote:\n> see if statistics improve:\n> SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv,\n> FROM pg_stats WHERE attname~'customers_customer' AND tablename='id' GROUP BY 1,2,3,4,5 ORDER BY 1\n\nOn Mon, Nov 06, 2017 at 09:12:01PM +0000, Adam Torres wrote:\n> I changed the statistics on av.customer_id as suggested and the number\n> returned by pg_stats went from 202,333 to 904,097.\n\nDo you mean n_distinct ? It' be useful to see that query on pg_stats. Also I\ndon't know that we've seen \\d output for the tables (or at least the joined\ncolumns) or the full query ?\n\n> There are 11.2 million distinct customer_ids on the 14.8 million vehicle\n> records.\n\nIf there's so many distinct ids, updating stats won't help the rowcount\nestimate (and could even hurt) - it can only store 10000 most-common-values.\n\nAre there as many distinct values for cc.id ?\n\nI would try to reproduce the rowcount problem with a minimal query:\nexplain analyze SELECT FROM av JOIN cc ON av.customer_id=cc.id; --WHERE cc.id<99;\nMaybe the rows estimate is okay for some values and not for others, so maybe\nyou need to try various WHERE (with JOIN an additional tables if need be...but\nwithout reimplementing the whole query).\n\nI just noticed there are two conditions on dealer_id, one from table av and one\nfrom table cc_1. It seems likely those are co-related/non-independent\nconditions..but postgres probably doesn't know that (unless you used PG96 FK\nlogic, or PG10 multi-variable stats). \n\nAs a test, you could try dropping one of those conditions, or maybe a hacky\nchange like ROW(av.dealer_id, cc_1.dealer_id)=ROW('EC000079', 'EC000079'),\nwhich postgres estimates as no more selective than a single equality test. BTW\nthis is all from src/backend/utils/adt/selfuncs.c.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Nov 2017 23:11:14 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance loss upgrading from 9.3 to 9.6"
}
] |
[
{
"msg_contents": "Hello,\n\nI was encouraged to write up a few simplified, reproducible performance cases, that occur (similarly) in our production environment. Attached you find a generic script that sets up generic tables, used for the three different cases. While I think at all of them\nI included the times needed on my machine, the times differ by a small margin on rerun. Every block used was hit in the ring buffer, no need to turn to the kernel buffer in these cases. While this isn't exactly to be expected in the production cases, I doubt this impacts the performance-difference too much.\nEven though I originally developed these examples with different planer settings the default ones seem to work quite reasonably.\n\nOne thing to notice is that in our environment there is a lot of dynamic query-building going on, which might help understanding why we care about the second and third case.\n\nThe first case is the most easy to work around, but I think it's a very common one.\nWhile it's true that this is a special case of the - probably not so easy to solve - cross table correlation issue, this is somehow special because one of the table is accessed via a single unique key. I thought to bring it up, since I maintain (meta-)functions that build functions that select and reinsert these values in the query to expose them to the planner. While this solution works fine, this is a very common cross table correlation issues , while another chunk is the case where are referenced by a foreign key. I'm not sure whether it's a good idea to acquire a lock at planning time or rather recheck the exact values at execution time, but even if it's just using the exact values of that single row (similar to a stable function) at planning time to get a better estimate seems worth it to me.\n\nThe second case is something that happens a lot in our environment (mainly in dynamically composed queries). I wouldn't be so pedantic if 30 would be the largest occurring list length, but we have bigger lists the issue gets bigger.\nBesides the constraint exclusion with the values approach I showed, there is the much bigger issue artificially introducing cross table correlation issues, leading to absurd estimates (just try inserting 100000 in the values list to see what I mean), damaging the plan if one join more tables to it. I didn't choose that case even though I think it's much more frequent, just because joining more relations make it harder to grasp.\nI try to guess the selectivity of that clause in application code and choosing an in or values clause accordingly. As one would expect that is not only annoying to maintain, but in a world of dynamic queries this also leads to quite bizarre performance behavior in some cases.\nUsing a hashtable to enforce the predicate (if the list contains more than x elements) would sound reasonable to me. One might consider workmem, even though just the thought of having a query string that rivals the size of work_mem sounds stupid. What do you think?\n\nThe third case is something a little bit more sophisticated. Sadly it isn't just tied to this obvious case where one can just create an index (create unique index on u (expensive_func(id)); would solve this case), but appears mainly when there are more than three tables with a lot of different predicates of different expense and selectivity. 
Even though it's not that incredible frequent, maintaining the corresponding application code (see case two) is still quite painful.\n\nBest regards\nArne\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
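For the second case, a minimal sketch of the two query shapes being contrasted (a generic table t with an integer id column is assumed here, since the attached setup script is not reproduced in this archive):

-- predicate as a literal IN list:
SELECT * FROM t WHERE id IN (1, 2, 3, 4, 5);

-- the same predicate written as a join against a VALUES list, which is planned as a small relation:
SELECT t.*
FROM t
JOIN (VALUES (1), (2), (3), (4), (5)) AS v(id) ON v.id = t.id;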
"msg_date": "Mon, 6 Nov 2017 20:05:36 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dynamic performance issues"
}
] |
[
{
"msg_contents": "Hi there,\n\nI'm using PostgreSQL 9.5.9 on Debian and experience slow execution of some\nbasic SET statements.\n\nI created about 1600 roles and use that setup for a multi tenancy\napplication:\n\n--snip--\n-- create one role per tenant\nCREATE ROLE tenant1;\n...\nCREATE ROLE tenant1600;\n\n-- create admin role that is granted all tenant roles\nCREATE ROLE admin NOINHERIT LOGIN PASSWORD 'XXXX';\nGRANT tenant1 TO admin;\n...\nGRANT tenant1600 TO admin;\n\n-- every tenant resides in its own schema\nCREATE SCHEMA AUTHORIZATION tenant1;\n...\nCREATE SCHEMA AUTHORIZATION tenant1600;\n--snap--\n\nMy application solely uses the role 'admin' to connect to the database.\nWhen performing sql statements for a specific tenant (e.g. tenant1337), a\nconnection with user 'admin' is established and the following commands are\nexecuted:\n\nSET ROLE 1337;\nSET search_path = tenant1337;\n\nThen the application uses that connection to perform various statements in\nthe database. As I'm using a connection pool, every connection that is\nreturned to the pool executes the following commands:\n\nRESET ROLE;\nSET search_path = DEFAULT;\n\nMy application is a web service that approximately executes some hundred\nstatements per second.\n\nI set \"log_min_duration_statement = 200ms\" and I get about 2k to 4k lines\nper day with statements like \"SET ROLE\"\", \"SET search_path ...\" and \"RESET\nROLE\":\n\n--snip--\n2017-11-07 09:44:30 CET [27760]: [3-1] user=admin,db=mydb LOG: duration:\n901.591 ms execute <unnamed>: SET ROLE \"tenant762\"\n2017-11-07 09:44:30 CET [27659]: [4-1] user=admin,db=mydb LOG: duration:\n1803.971 ms execute <unnamed>: SET ROLE \"tenant392\"\n2017-11-07 09:44:30 CET [27810]: [4-1] user=admin,db=mydb LOG: duration:\n1548.858 ms execute <unnamed>: SET ROLE \"tenant232\"\n2017-11-07 09:44:30 CET [27652]: [3-1] user=admin,db=mydb LOG: duration:\n1248.838 ms execute <unnamed>: SET ROLE \"tenant7\"\n2017-11-07 09:44:30 CET [27706]: [3-1] user=admin,db=mydb LOG: duration:\n998.044 ms execute <unnamed>: SET ROLE \"tenant1239\"\n2017-11-07 09:44:30 CET [27820]: [14-1] user=admin,db=mydb LOG: duration:\n1573.000 ms execute <unnamed>: SET ROLE \"tenant378\"\n2017-11-07 09:44:30 CET [28303]: [4-1] user=admin,db=mydb LOG: duration:\n2116.651 ms execute <unnamed>: SET ROLE \"tenant302\"\n2017-11-07 09:44:30 CET [27650]: [4-1] user=admin,db=mydb LOG: duration:\n2011.629 ms execute <unnamed>: SET ROLE \"tenant938\"\n2017-11-07 09:44:30 CET [28026]: [4-1] user=admin,db=mydb LOG: duration:\n2378.719 ms execute <unnamed>: SET ROLE \"tenant 634\"\n2017-11-07 09:44:30 CET [27708]: [7-1] user=admin,db=mydb LOG: duration:\n1327.962 ms execute <unnamed>: SET ROLE \"tenant22\"\n2017-11-07 09:44:30 CET [27707]: [4-1] user=admin,db=mydb LOG: duration:\n1366.602 ms execute <unnamed>: SET ROLE \"tenant22\"\n2017-11-07 09:44:30 CET [27610]: [8-1] user=admin,db=mydb LOG: duration:\n1098.192 ms execute <unnamed>: SET ROLE \"tenant22\"\n2017-11-07 09:44:30 CET [27762]: [3-1] user=admin,db=mydb LOG: duration:\n1349.368 ms execute <unnamed>: SET ROLE \"tenant22\"\n2017-11-07 09:44:30 CET [27756]: [4-1] user=admin,db=mydb LOG: duration:\n1735.926 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [28190]: [4-1] user=admin,db=mydb LOG: duration:\n1987.256 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [27646]: [3-1] user=admin,db=mydb LOG: duration:\n205.063 ms execute <unnamed>: SET ROLE \"tenant7\"\n2017-11-07 09:44:31 CET [27649]: [3-1] user=admin,db=mydb LOG: 
duration:\n225.152 ms execute <unnamed>: SET ROLE \"tenant302\"\n2017-11-07 09:44:31 CET [27654]: [5-1] user=admin,db=mydb LOG: duration:\n2235.243 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [27674]: [4-1] user=admin,db=mydb LOG: duration:\n2080.905 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [28307]: [5-1] user=admin,db=mydb LOG: duration:\n2351.064 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [27681]: [4-1] user=admin,db=mydb LOG: duration:\n2455.486 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:31 CET [27651]: [4-1] user=admin,db=mydb LOG: duration:\n1830.737 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [28137]: [4-1] user=admin,db=mydb LOG: duration:\n1973.241 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27682]: [4-1] user=admin,db=mydb LOG: duration:\n1863.962 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [28243]: [4-1] user=admin,db=mydb LOG: duration:\n2120.339 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [28025]: [5-1] user=admin,db=mydb LOG: duration:\n2643.520 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27709]: [7-1] user=admin,db=mydb LOG: duration:\n2519.842 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27655]: [5-1] user=admin,db=mydb LOG: duration:\n2622.280 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [28242]: [4-1] user=admin,db=mydb LOG: duration:\n2326.483 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27652]: [4-1] user=admin,db=mydb LOG: duration:\n1746.423 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27706]: [4-1] user=admin,db=mydb LOG: duration:\n1759.188 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27603]: [5-1] user=admin,db=mydb LOG: duration:\n2521.347 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27818]: [5-1] user=admin,db=mydb LOG: duration:\n2382.254 ms parse <unnamed>: SET search_path = DEFAULT\n2017-11-07 09:44:32 CET [27761]: [5-1] user=admin,db=mydb LOG: duration:\n2372.629 ms parse <unnamed>: SET search_path = DEFAULT\n--snap--\n\nBesides those peaks in statement duration, my application performs (i.e.\nhas acceptable response times) most of the time.\n\nIs there anything I can do to improve performance here?\nAny help is greatly appreciated!\n\nRegards,\nUlf\n\nHi there,I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of some basic SET statements.I created about 1600 roles and use that setup for a multi tenancy application:--snip---- create one role per tenantCREATE ROLE tenant1;...CREATE ROLE tenant1600;-- create admin role that is granted all tenant rolesCREATE ROLE admin NOINHERIT LOGIN PASSWORD 'XXXX';GRANT tenant1 TO admin;...GRANT tenant1600 TO admin;-- every tenant resides in its own schemaCREATE SCHEMA AUTHORIZATION tenant1;...CREATE SCHEMA AUTHORIZATION tenant1600;--snap--My application solely uses the role 'admin' to connect to the database. When performing sql statements for a specific tenant (e.g. tenant1337), a connection with user 'admin' is established and the following commands are executed:SET ROLE 1337;SET search_path = tenant1337;Then the application uses that connection to perform various statements in the database. 
As I'm using a connection pool, every connection that is returned to the pool executes the following commands:RESET ROLE;SET search_path = DEFAULT;My application is a web service that approximately executes some hundred statements per second.I set \"log_min_duration_statement = 200ms\" and I get about 2k to 4k lines per day with statements like \"SET ROLE\"\", \"SET search_path ...\" and \"RESET ROLE\":--snip--2017-11-07 09:44:30 CET [27760]: [3-1] user=admin,db=mydb LOG: duration: 901.591 ms execute <unnamed>: SET ROLE \"tenant762\"2017-11-07 09:44:30 CET [27659]: [4-1] user=admin,db=mydb LOG: duration: 1803.971 ms execute <unnamed>: SET ROLE \"tenant392\"2017-11-07 09:44:30 CET [27810]: [4-1] user=admin,db=mydb LOG: duration: 1548.858 ms execute <unnamed>: SET ROLE \"tenant232\"2017-11-07 09:44:30 CET [27652]: [3-1] user=admin,db=mydb LOG: duration: 1248.838 ms execute <unnamed>: SET ROLE \"tenant7\"2017-11-07 09:44:30 CET [27706]: [3-1] user=admin,db=mydb LOG: duration: 998.044 ms execute <unnamed>: SET ROLE \"tenant1239\"2017-11-07 09:44:30 CET [27820]: [14-1] user=admin,db=mydb LOG: duration: 1573.000 ms execute <unnamed>: SET ROLE \"tenant378\"2017-11-07 09:44:30 CET [28303]: [4-1] user=admin,db=mydb LOG: duration: 2116.651 ms execute <unnamed>: SET ROLE \"tenant302\"2017-11-07 09:44:30 CET [27650]: [4-1] user=admin,db=mydb LOG: duration: 2011.629 ms execute <unnamed>: SET ROLE \"tenant938\"2017-11-07 09:44:30 CET [28026]: [4-1] user=admin,db=mydb LOG: duration: 2378.719 ms execute <unnamed>: SET ROLE \"tenant 634\"2017-11-07 09:44:30 CET [27708]: [7-1] user=admin,db=mydb LOG: duration: 1327.962 ms execute <unnamed>: SET ROLE \"tenant22\"2017-11-07 09:44:30 CET [27707]: [4-1] user=admin,db=mydb LOG: duration: 1366.602 ms execute <unnamed>: SET ROLE \"tenant22\"2017-11-07 09:44:30 CET [27610]: [8-1] user=admin,db=mydb LOG: duration: 1098.192 ms execute <unnamed>: SET ROLE \"tenant22\"2017-11-07 09:44:30 CET [27762]: [3-1] user=admin,db=mydb LOG: duration: 1349.368 ms execute <unnamed>: SET ROLE \"tenant22\"2017-11-07 09:44:30 CET [27756]: [4-1] user=admin,db=mydb LOG: duration: 1735.926 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [28190]: [4-1] user=admin,db=mydb LOG: duration: 1987.256 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [27646]: [3-1] user=admin,db=mydb LOG: duration: 205.063 ms execute <unnamed>: SET ROLE \"tenant7\"2017-11-07 09:44:31 CET [27649]: [3-1] user=admin,db=mydb LOG: duration: 225.152 ms execute <unnamed>: SET ROLE \"tenant302\"2017-11-07 09:44:31 CET [27654]: [5-1] user=admin,db=mydb LOG: duration: 2235.243 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [27674]: [4-1] user=admin,db=mydb LOG: duration: 2080.905 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [28307]: [5-1] user=admin,db=mydb LOG: duration: 2351.064 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [27681]: [4-1] user=admin,db=mydb LOG: duration: 2455.486 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:31 CET [27651]: [4-1] user=admin,db=mydb LOG: duration: 1830.737 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [28137]: [4-1] user=admin,db=mydb LOG: duration: 1973.241 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27682]: [4-1] user=admin,db=mydb LOG: duration: 1863.962 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [28243]: [4-1] user=admin,db=mydb LOG: duration: 2120.339 ms parse <unnamed>: SET 
search_path = DEFAULT2017-11-07 09:44:32 CET [28025]: [5-1] user=admin,db=mydb LOG: duration: 2643.520 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27709]: [7-1] user=admin,db=mydb LOG: duration: 2519.842 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27655]: [5-1] user=admin,db=mydb LOG: duration: 2622.280 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [28242]: [4-1] user=admin,db=mydb LOG: duration: 2326.483 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27652]: [4-1] user=admin,db=mydb LOG: duration: 1746.423 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27706]: [4-1] user=admin,db=mydb LOG: duration: 1759.188 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27603]: [5-1] user=admin,db=mydb LOG: duration: 2521.347 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27818]: [5-1] user=admin,db=mydb LOG: duration: 2382.254 ms parse <unnamed>: SET search_path = DEFAULT2017-11-07 09:44:32 CET [27761]: [5-1] user=admin,db=mydb LOG: duration: 2372.629 ms parse <unnamed>: SET search_path = DEFAULT--snap--Besides those peaks in statement duration, my application performs (i.e. has acceptable response times) most of the time.Is there anything I can do to improve performance here?Any help is greatly appreciated!Regards,Ulf",
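An aside on the reset step described above: DISCARD ALL runs SET SESSION AUTHORIZATION DEFAULT and RESET ALL (among other cleanup), so it undoes both the SET ROLE and the search_path change in one statement; note that it also deallocates prepared statements, which matters if the driver caches them. A sketch of the per-connection cycle, reusing tenant1337 from the text above:

-- when a connection is handed out for a tenant:
SET ROLE tenant1337;
SET search_path = tenant1337;

-- when the connection goes back to the pool:
DISCARD ALL;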
"msg_date": "Tue, 7 Nov 2017 11:11:36 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "Hi,\n\nOn 2017-11-07 11:11:36 +0100, Ulf Lohbr�gge wrote:\n> I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of some\n> basic SET statements.\n> \n> I created about 1600 roles and use that setup for a multi tenancy\n> application:\n\nHm. How often do you drop/create these roles? How many other\nroles/groups is one role a member of?\n\n\n> My application solely uses the role 'admin' to connect to the database.\n> When performing sql statements for a specific tenant (e.g. tenant1337), a\n> connection with user 'admin' is established and the following commands are\n> executed:\n> \n> SET ROLE 1337;\n> SET search_path = tenant1337;\n> \n> Then the application uses that connection to perform various statements in\n> the database.\n\nJust to be sure: You realize bad application code could escape from\nthat, right?\n\n\n> My application is a web service that approximately executes some hundred\n> statements per second.\n> \n> I set \"log_min_duration_statement = 200ms\" and I get about 2k to 4k lines\n> per day with statements like \"SET ROLE\"\", \"SET search_path ...\" and \"RESET\n> ROLE\":\n> \n> --snip--\n> 2017-11-07 09:44:30 CET [27760]: [3-1] user=admin,db=mydb LOG: duration:\n> 901.591 ms execute <unnamed>: SET ROLE \"tenant762\"\n> 2017-11-07 09:44:30 CET [27659]: [4-1] user=admin,db=mydb LOG: duration:\n> 1803.971 ms execute <unnamed>: SET ROLE \"tenant392\"\n\nThat is weird.\n\n\n> Besides those peaks in statement duration, my application performs (i.e.\n> has acceptable response times) most of the time.\n> \n> Is there anything I can do to improve performance here?\n> Any help is greatly appreciated!\n\nCan you manually reproduce the problem? What times do you get if you\nmanually run the statement?\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Nov 2017 07:11:08 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET\n ROLE"
},
{
"msg_contents": "Hi,\n\n2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n\n> Hi,\n>\n> On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n> > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of\n> some\n> > basic SET statements.\n> >\n> > I created about 1600 roles and use that setup for a multi tenancy\n> > application:\n>\n> Hm. How often do you drop/create these roles? How many other\n> roles/groups is one role a member of?\n>\n\nI create between 10-40 roles per day.\n\nThe roles tenant1 to tenant1600 are not members of any other roles. Only\nthe role 'admin' is member of many roles (tenant1 to tenant1600).\n\n\n>\n>\n> > My application solely uses the role 'admin' to connect to the database.\n> > When performing sql statements for a specific tenant (e.g. tenant1337), a\n> > connection with user 'admin' is established and the following commands\n> are\n> > executed:\n> >\n> > SET ROLE 1337;\n> > SET search_path = tenant1337;\n> >\n> > Then the application uses that connection to perform various statements\n> in\n> > the database.\n>\n> Just to be sure: You realize bad application code could escape from\n> that, right?\n>\n\nYes, I do. :)\nMy application executes all statements via an ORM tool (Hibernate). But\nevil code could still get the plain DB-Connection and execute \"SET ROLE\n...\" statements. My application used to connect as tenant1 to tenant1600\nbut that lead to a vast amount of postgresql connections (even with\npgbouncer).\n\n\n>\n>\n> > My application is a web service that approximately executes some hundred\n> > statements per second.\n> >\n> > I set \"log_min_duration_statement = 200ms\" and I get about 2k to 4k lines\n> > per day with statements like \"SET ROLE\"\", \"SET search_path ...\" and\n> \"RESET\n> > ROLE\":\n> >\n> > --snip--\n> > 2017-11-07 09:44:30 CET [27760]: [3-1] user=admin,db=mydb LOG: duration:\n> > 901.591 ms execute <unnamed>: SET ROLE \"tenant762\"\n> > 2017-11-07 09:44:30 CET [27659]: [4-1] user=admin,db=mydb LOG: duration:\n> > 1803.971 ms execute <unnamed>: SET ROLE \"tenant392\"\n>\n> That is weird.\n>\n>\n> > Besides those peaks in statement duration, my application performs (i.e.\n> > has acceptable response times) most of the time.\n> >\n> > Is there anything I can do to improve performance here?\n> > Any help is greatly appreciated!\n>\n> Can you manually reproduce the problem? What times do you get if you\n> manually run the statement?\n>\n\nUnfortunately not. Every time I manually execute \"SET ROLE ...\" the\nstatement is pretty fast. I created a simple SQL file that contains the\nfollowing statements:\n\n--snip--\nSET ROLE tenant382;\nSET ROLE tenant1337;\nSET ROLE tenant2;\n-- repeat the lines above 100k times\n--snap--\n\nWhen I execute those statements via 'time psql < set-roles.sql', the call\nlasts 138,7 seconds. So 300k \"SET ROLE\" statements result in 0,46ms per\ncall on average.\n\nCheers,\nUlf\n\nHi,2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:Hi,\n\nOn 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n> I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of some\n> basic SET statements.\n>\n> I created about 1600 roles and use that setup for a multi tenancy\n> application:\n\nHm. How often do you drop/create these roles? How many other\nroles/groups is one role a member of?I create between 10-40 roles per day.The roles tenant1 to tenant1600 are not members of any other roles. Only the role 'admin' is member of many roles (tenant1 to tenant1600). 
\n\n\n> My application solely uses the role 'admin' to connect to the database.\n> When performing sql statements for a specific tenant (e.g. tenant1337), a\n> connection with user 'admin' is established and the following commands are\n> executed:\n>\n> SET ROLE 1337;\n> SET search_path = tenant1337;\n>\n> Then the application uses that connection to perform various statements in\n> the database.\n\nJust to be sure: You realize bad application code could escape from\nthat, right?Yes, I do. :)My application executes all statements via an ORM tool (Hibernate). But evil code could still get the plain DB-Connection and execute \"SET ROLE ...\" statements. My application used to connect as tenant1 to tenant1600 but that lead to a vast amount of postgresql connections (even with pgbouncer). \n\n\n> My application is a web service that approximately executes some hundred\n> statements per second.\n>\n> I set \"log_min_duration_statement = 200ms\" and I get about 2k to 4k lines\n> per day with statements like \"SET ROLE\"\", \"SET search_path ...\" and \"RESET\n> ROLE\":\n>\n> --snip--\n> 2017-11-07 09:44:30 CET [27760]: [3-1] user=admin,db=mydb LOG: duration:\n> 901.591 ms execute <unnamed>: SET ROLE \"tenant762\"\n> 2017-11-07 09:44:30 CET [27659]: [4-1] user=admin,db=mydb LOG: duration:\n> 1803.971 ms execute <unnamed>: SET ROLE \"tenant392\"\n\nThat is weird.\n\n\n> Besides those peaks in statement duration, my application performs (i.e.\n> has acceptable response times) most of the time.\n>\n> Is there anything I can do to improve performance here?\n> Any help is greatly appreciated!\n\nCan you manually reproduce the problem? What times do you get if you\nmanually run the statement?Unfortunately not. Every time I manually execute \"SET ROLE ...\" the statement is pretty fast. I created a simple SQL file that contains the following statements:--snip--SET ROLE tenant382;SET ROLE tenant1337;SET ROLE tenant2;-- repeat the lines above 100k times--snap--When I execute those statements via 'time psql < set-roles.sql', the call lasts 138,7 seconds. So 300k \"SET ROLE\" statements result in 0,46ms per call on average.Cheers,Ulf",
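A serial psql run like the one above mostly measures round trips. Something closer to the production pattern (12 concurrent workers) can be sketched with a custom pgbench script, assuming the same admin role is used to connect; the file name and numbers are illustrative:

-- contents of set_role.sql (name is arbitrary)
SET ROLE tenant1337;
RESET ROLE;

and then, from the shell:

pgbench -n -f set_role.sql -c 12 -j 4 -T 30 -U admin mydb

The reported per-transaction latency should show whether the slow SET ROLE calls only appear under concurrency.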
"msg_date": "Tue, 7 Nov 2017 18:48:14 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "On 2017-11-07 18:48:14 +0100, Ulf Lohbr�gge wrote:\n> Hi,\n> \n> 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n> \n> > Hi,\n> >\n> > On 2017-11-07 11:11:36 +0100, Ulf Lohbr�gge wrote:\n> > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of\n> > some\n> > > basic SET statements.\n> > >\n> > > I created about 1600 roles and use that setup for a multi tenancy\n> > > application:\n> >\n> > Hm. How often do you drop/create these roles? How many other\n> > roles/groups is one role a member of?\n> >\n> \n> I create between 10-40 roles per day.\n\nCould you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\nyou ever delete roles?\n\n> > Can you manually reproduce the problem? What times do you get if you\n> > manually run the statement?\n> >\n> \n> Unfortunately not. Every time I manually execute \"SET ROLE ...\" the\n> statement is pretty fast. I created a simple SQL file that contains the\n> following statements:\n> \n> --snip--\n> SET ROLE tenant382;\n> SET ROLE tenant1337;\n> SET ROLE tenant2;\n> -- repeat the lines above 100k times\n> --snap--\n> \n> When I execute those statements via 'time psql < set-roles.sql', the call\n> lasts 138,7 seconds. So 300k \"SET ROLE\" statements result in 0,46ms per\n> call on average.\n\nAnd most of that is going to be roundtrip time. Hm. Could it be that\nyou're just seeing the delays when pgbouncer establishes new pooling\nconnections and you're attributing that to SET ROLE in your app?\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Nov 2017 11:45:17 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET\n ROLE"
},
{
"msg_contents": "2017-11-07 20:45 GMT+01:00 Andres Freund <[email protected]>:\n\n> On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:\n> > Hi,\n> >\n> > 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n> >\n> > > Hi,\n> > >\n> > > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n> > > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of\n> > > some\n> > > > basic SET statements.\n> > > >\n> > > > I created about 1600 roles and use that setup for a multi tenancy\n> > > > application:\n> > >\n> > > Hm. How often do you drop/create these roles? How many other\n> > > roles/groups is one role a member of?\n> > >\n> >\n> > I create between 10-40 roles per day.\n>\n> Could you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\n> you ever delete roles?\n>\n\nWhich table do you mean exactly? pg_catalog.pg_authid?\n\nSorry, forgot to write that: I delete about 2-3 roles per day.\n\n\n> > > Can you manually reproduce the problem? What times do you get if you\n> > > manually run the statement?\n> > >\n> >\n> > Unfortunately not. Every time I manually execute \"SET ROLE ...\" the\n> > statement is pretty fast. I created a simple SQL file that contains the\n> > following statements:\n> >\n> > --snip--\n> > SET ROLE tenant382;\n> > SET ROLE tenant1337;\n> > SET ROLE tenant2;\n> > -- repeat the lines above 100k times\n> > --snap--\n> >\n> > When I execute those statements via 'time psql < set-roles.sql', the call\n> > lasts 138,7 seconds. So 300k \"SET ROLE\" statements result in 0,46ms per\n> > call on average.\n>\n> And most of that is going to be roundtrip time. Hm. Could it be that\n> you're just seeing the delays when pgbouncer establishes new pooling\n> connections and you're attributing that to SET ROLE in your app?\n>\n\nI stopped using pgbouncer when I solely started using role 'admin' with\n\"SET ROLE\" statements. I use a connection pool (HikariCP) that renews\nconnections after 30 minutes. I couldn't find a pattern yet when those slow\nstatements occur.\n\nDoes using a few thousands roles and schemata in postgres scale well? I\nonly found some theoretical descriptions of multi tenancy setups with\npostgres while googling.\nUsing tabulator in psql cli is pretty slow, mainly\nbecause pg_table_is_visible() is being called for many entries in pg_class.\n\nCheers,\nUlf\n\n2017-11-07 20:45 GMT+01:00 Andres Freund <[email protected]>:On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:\n> Hi,\n>\n> 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n>\n> > Hi,\n> >\n> > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n> > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution of\n> > some\n> > > basic SET statements.\n> > >\n> > > I created about 1600 roles and use that setup for a multi tenancy\n> > > application:\n> >\n> > Hm. How often do you drop/create these roles? How many other\n> > roles/groups is one role a member of?\n> >\n>\n> I create between 10-40 roles per day.\n\nCould you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\nyou ever delete roles?Which table do you mean exactly? pg_catalog.pg_authid?Sorry, forgot to write that: I delete about 2-3 roles per day. \n> > Can you manually reproduce the problem? What times do you get if you\n> > manually run the statement?\n> >\n>\n> Unfortunately not. Every time I manually execute \"SET ROLE ...\" the\n> statement is pretty fast. 
I created a simple SQL file that contains the\n> following statements:\n>\n> --snip--\n> SET ROLE tenant382;\n> SET ROLE tenant1337;\n> SET ROLE tenant2;\n> -- repeat the lines above 100k times\n> --snap--\n>\n> When I execute those statements via 'time psql < set-roles.sql', the call\n> lasts 138,7 seconds. So 300k \"SET ROLE\" statements result in 0,46ms per\n> call on average.\n\nAnd most of that is going to be roundtrip time. Hm. Could it be that\nyou're just seeing the delays when pgbouncer establishes new pooling\nconnections and you're attributing that to SET ROLE in your app?I stopped using pgbouncer when I solely started using role 'admin' with \"SET ROLE\" statements. I use a connection pool (HikariCP) that renews connections after 30 minutes. I couldn't find a pattern yet when those slow statements occur.Does using a few thousands roles and schemata in postgres scale well? I only found some theoretical descriptions of multi tenancy setups with postgres while googling.Using tabulator in psql cli is pretty slow, mainly because pg_table_is_visible() is being called for many entries in pg_class.Cheers,Ulf",
"msg_date": "Tue, 7 Nov 2017 22:25:36 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "On Tue, Nov 7, 2017 at 2:25 PM, Ulf Lohbrügge <[email protected]> wrote:\n> 2017-11-07 20:45 GMT+01:00 Andres Freund <[email protected]>:\n>>\n>> On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:\n>> > Hi,\n>> >\n>> > 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n>> >\n>> > > Hi,\n>> > >\n>> > > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n>> > > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution\n>> > > > of\n>> > > some\n>> > > > basic SET statements.\n>> > > >\n>> > > > I created about 1600 roles and use that setup for a multi tenancy\n>> > > > application:\n>> > >\n>> > > Hm. How often do you drop/create these roles? How many other\n>> > > roles/groups is one role a member of?\n>> > >\n>> >\n>> > I create between 10-40 roles per day.\n>>\n>> Could you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\n>> you ever delete roles?\n>\n>\n> Which table do you mean exactly? pg_catalog.pg_authid?\n>\n> Sorry, forgot to write that: I delete about 2-3 roles per day.\n\nI'm gonna take a guess that pg_users or pg_roles has gotten bloated\nover time. Try running a vacuum full on both of them. It's also\npossible some other pg_xxx table is bloated out here too you might\nneed to download something like checkpostgres.pl to check for bloat in\nsystem catalog tables.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Nov 2017 14:39:15 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "2017-11-07 22:39 GMT+01:00 Scott Marlowe <[email protected]>:\n\n> On Tue, Nov 7, 2017 at 2:25 PM, Ulf Lohbrügge <[email protected]>\n> wrote:\n> > 2017-11-07 20:45 GMT+01:00 Andres Freund <[email protected]>:\n> >>\n> >> On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:\n> >> > Hi,\n> >> >\n> >> > 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n> >> >\n> >> > > Hi,\n> >> > >\n> >> > > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n> >> > > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution\n> >> > > > of\n> >> > > some\n> >> > > > basic SET statements.\n> >> > > >\n> >> > > > I created about 1600 roles and use that setup for a multi tenancy\n> >> > > > application:\n> >> > >\n> >> > > Hm. How often do you drop/create these roles? How many other\n> >> > > roles/groups is one role a member of?\n> >> > >\n> >> >\n> >> > I create between 10-40 roles per day.\n> >>\n> >> Could you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\n> >> you ever delete roles?\n> >\n> >\n> > Which table do you mean exactly? pg_catalog.pg_authid?\n> >\n> > Sorry, forgot to write that: I delete about 2-3 roles per day.\n>\n> I'm gonna take a guess that pg_users or pg_roles has gotten bloated\n> over time. Try running a vacuum full on both of them. It's also\n> possible some other pg_xxx table is bloated out here too you might\n> need to download something like checkpostgres.pl to check for bloat in\n> system catalog tables.\n>\n\nAs pg_user and pg_roles are views: Do you mean pg_authid? That table is\njust 432kb large:\n\n--snip--\npostgres=# select pg_size_pretty(pg_total_relation_size('pg_authid'));\n pg_size_pretty\n----------------\n 432 kB\n(1 row)\n--snap--\n\nI don't want to start a vacuum full right now because I'm not quite sure if\nthings will lock up. But I will do it later when there is less traffic.\n\nI just ran \"check_postgres.pl --action=bloat\" and got the following output:\n\n--snip--\nPOSTGRES_BLOAT OK: DB \"postgres\" (host:localhost) (db postgres) index\npg_shdepend_depender_index rows:? pages:9615 shouldbe:4073 (2.4X) wasted\nbytes:45400064 (43 MB) | pg_shdepend_depender_index=45400064B\npg_catalog.pg_shdepend=9740288B pg_shdepend_reference_index=4046848B\npg_depend_reference_index=98304B pg_depend_depender_index=57344B\npg_catalog.pg_class=32768B pg_catalog.pg_description=16384B\npg_amop_fam_strat_index=8192B pg_amop_opr_fam_index=8192B\npg_catalog.pg_amop=8192B pg_catalog.pg_depend=8192B pg_class_oid_index=0B\npg_class_relname_nsp_index=0B pg_class_tblspc_relfilenode_index=0B\npg_description_o_c_o_index=0B\n--snap--\n\nLooks fine, doesn't it?\n\nCheers,\nUlf\n\n2017-11-07 22:39 GMT+01:00 Scott Marlowe <[email protected]>:On Tue, Nov 7, 2017 at 2:25 PM, Ulf Lohbrügge <[email protected]> wrote:\n> 2017-11-07 20:45 GMT+01:00 Andres Freund <[email protected]>:\n>>\n>> On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:\n>> > Hi,\n>> >\n>> > 2017-11-07 16:11 GMT+01:00 Andres Freund <[email protected]>:\n>> >\n>> > > Hi,\n>> > >\n>> > > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:\n>> > > > I'm using PostgreSQL 9.5.9 on Debian and experience slow execution\n>> > > > of\n>> > > some\n>> > > > basic SET statements.\n>> > > >\n>> > > > I created about 1600 roles and use that setup for a multi tenancy\n>> > > > application:\n>> > >\n>> > > Hm. How often do you drop/create these roles? 
How many other\n>> > > roles/groups is one role a member of?\n>> > >\n>> >\n>> > I create between 10-40 roles per day.\n>>\n>> Could you VACUUM (VERBOSE, FREEZE) that table and report the output? Do\n>> you ever delete roles?\n>\n>\n> Which table do you mean exactly? pg_catalog.pg_authid?\n>\n> Sorry, forgot to write that: I delete about 2-3 roles per day.\n\nI'm gonna take a guess that pg_users or pg_roles has gotten bloated\nover time. Try running a vacuum full on both of them. It's also\npossible some other pg_xxx table is bloated out here too you might\nneed to download something like checkpostgres.pl to check for bloat in\nsystem catalog tables.As pg_user and pg_roles are views: Do you mean pg_authid? That table is just 432kb large:--snip--postgres=# select pg_size_pretty(pg_total_relation_size('pg_authid')); pg_size_pretty---------------- 432 kB(1 row) --snap--I don't want to start a vacuum full right now because I'm not quite sure if things will lock up. But I will do it later when there is less traffic.I just ran \"check_postgres.pl --action=bloat\" and got the following output:--snip--POSTGRES_BLOAT OK: DB \"postgres\" (host:localhost) (db postgres) index pg_shdepend_depender_index rows:? pages:9615 shouldbe:4073 (2.4X) wasted bytes:45400064 (43 MB) | pg_shdepend_depender_index=45400064B pg_catalog.pg_shdepend=9740288B pg_shdepend_reference_index=4046848B pg_depend_reference_index=98304B pg_depend_depender_index=57344B pg_catalog.pg_class=32768B pg_catalog.pg_description=16384B pg_amop_fam_strat_index=8192B pg_amop_opr_fam_index=8192B pg_catalog.pg_amop=8192B pg_catalog.pg_depend=8192B pg_class_oid_index=0B pg_class_relname_nsp_index=0B pg_class_tblspc_relfilenode_index=0B pg_description_o_c_o_index=0B--snap--Looks fine, doesn't it?Cheers,Ulf",
"msg_date": "Wed, 8 Nov 2017 00:04:18 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I just ran \"check_postgres.pl --action=bloat\" and got the following output:\n> ...\n> Looks fine, doesn't it?\n\nA possible explanation is that something is taking an exclusive lock\non some system catalog and holding it for a second or two. If so,\nturning on log_lock_waits might provide some useful info.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Nov 2017 18:45:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
},
{
"msg_contents": "2017-11-08 0:45 GMT+01:00 Tom Lane <[email protected]>:\n\n> =?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> > I just ran \"check_postgres.pl --action=bloat\" and got the following\n> output:\n> > ...\n> > Looks fine, doesn't it?\n>\n> A possible explanation is that something is taking an exclusive lock\n> on some system catalog and holding it for a second or two. If so,\n> turning on log_lock_waits might provide some useful info.\n>\n> regards, tom lane\n>\n\nI just checked my configuration and found out that \"log_lock_waits\" was\nalready enabled.\n\nUnfortunately there is no log output of locks when those long running \"SET\nROLE\" statements occur.\n\nRegards,\nUlf\n\n2017-11-08 0:45 GMT+01:00 Tom Lane <[email protected]>:=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I just ran \"check_postgres.pl --action=bloat\" and got the following output:\n> ...\n> Looks fine, doesn't it?\n\nA possible explanation is that something is taking an exclusive lock\non some system catalog and holding it for a second or two. If so,\nturning on log_lock_waits might provide some useful info.\n\n regards, tom laneI just checked my configuration and found out that \"log_lock_waits\" was already enabled.Unfortunately there is no log output of locks when those long running \"SET ROLE\" statements occur.Regards,Ulf",
"msg_date": "Wed, 8 Nov 2017 10:31:42 +0100",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow execution of SET ROLE, SET search_path and RESET ROLE"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe have recently upgraded our project with a huge DB from Postgres v9.1 to\nv9.4. The whole system performance has degraded alarmingly after the\nupgrade. Simple operations that were taking only a few seconds in Postgres\n9.1 are now taking minutes of time.\n\nThe problem is not specific to one query orany particular kind of query.\nIts been generic and overall system has become very slow.\n\nWe tried running 'VACUUM ANALYZE' on the DB and that seemed to be helpful\ntoo. But the improvement after this is nowhere close to the performance we\nhad in 9.1.\n\nWe tried changing some of the performance parameters in the\npostgres.confirm as follows (our Postgres server has an 8GB RAM) -\n\nshared_buffers = 200MB\nmaintenance_work_mem = 1000MB\ndefault_statistics_target = 1000\neffective_cache_size = 4000MB\nAnd these made absolutely no difference to the query execution time.\n\nThe strangest part of the problem is when I EXPLAIN ANALYZE the same query\nmultiple times in the same Postgres server, it gives me different execution\ntimes every time ranging from 45 ms to 181 ms.\n\nWe are absolutely clueless on how to proceed. Any help would be greatly\nappreciated.\nThanks in advance.\n\nHi all,We have recently upgraded our project with a huge DB from Postgres v9.1 to v9.4. The whole system performance has degraded alarmingly after the upgrade. Simple operations that were taking only a few seconds in Postgres 9.1 are now taking minutes of time.The problem is not specific to one query orany particular kind of query. Its been generic and overall system has become very slow.We tried running 'VACUUM ANALYZE' on the DB and that seemed to be helpful too. But the improvement after this is nowhere close to the performance we had in 9.1.We tried changing some of the performance parameters in the postgres.confirm as follows (our Postgres server has an 8GB RAM) -shared_buffers = 200MBmaintenance_work_mem = 1000MBdefault_statistics_target = 1000effective_cache_size = 4000MBAnd these made absolutely no difference to the query execution time.The strangest part of the problem is when I EXPLAIN ANALYZE the same query multiple times in the same Postgres server, it gives me different execution times every time ranging from 45 ms to 181 ms.We are absolutely clueless on how to proceed. Any help would be greatly appreciated.Thanks in advance.",
"msg_date": "Fri, 10 Nov 2017 16:28:41 +0530",
"msg_from": "p kirti <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB slowness after upgrade from Postgres 9.1 to 9.4"
},
{
"msg_contents": "p kirti <[email protected]> writes:\n> We have recently upgraded our project with a huge DB from Postgres v9.1 to\n> v9.4. The whole system performance has degraded alarmingly after the\n> upgrade. Simple operations that were taking only a few seconds in Postgres\n> 9.1 are now taking minutes of time.\n\nAre you certain nothing else changed? Same hardware, same OS, same\ndatabase configuration settings?\n\nOnce you've eliminated issues like that, you'd need to drill down deeper.\nThere's useful advice to help crystallize the situation at\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 12 Nov 2017 14:37:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB slowness after upgrade from Postgres 9.1 to 9.4"
}
] |
[
{
"msg_contents": "(or, the opposite of the more common problem)\n\nI wrote this query some time ago to handle \"deferred\" table-rewriting type\npromoting ALTERs of a inheritence children, to avoid worst-case disk usage\naltering the whole heirarchy, and also locking the entire heirarchy against\nSELECT and INSERT.\n\nts=# explain analyze SELECT child c, parent p, array_agg(colpar.attname::text) cols, array_agg(colpar.atttypid::regtype) AS types FROM\nqueued_alters qa JOIN pg_attribute colpar ON qa.parent::regclass=colpar.attrelid JOIN\npg_attribute colcld ON qa.child::regclass=colcld.attrelid WHERE\ncolcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2\nORDER BY regexp_replace(child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\3\\5') DESC, -- by YYYYMM\nchild~'_[0-9]{6}$' DESC, -- monthly tables first\nregexp_replace(child, '.*_', '') DESC -- by YYYYMMDD\nLIMIT 1;\n\nUnfortunately we get this terrible plan:\n\nLimit (cost=337497.59..337497.60 rows=1 width=184) (actual time=2395.283..2395.283 rows=0 loops=1)\n -> Sort (cost=337497.59..337500.04 rows=980 width=184) (actual time=2395.281..2395.281 rows=0 loops=1)\n Sort Key: (regexp_replace((qa.child)::text, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$'::text, '\\3\\5'::text)) DESC, (((qa.child)::text ~ '_[0-9]{6}$'::text)) DESC, (regexp_replace((qa.child)::text, '.*_'::text, ''::text)) DESC\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=337470.64..337492.69 rows=980 width=184) (actual time=2395.273..2395.273 rows=0 loops=1)\n Group Key: qa.child, qa.parent\n -> Gather (cost=293727.20..336790.89 rows=54380 width=123) (actual time=2395.261..2395.261 rows=0 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n -> Hash Join (cost=292727.20..330352.89 rows=17542 width=123) (actual time=2341.328..2341.328 rows=0 loops=4)\n Hash Cond: ((((qa.child)::regclass)::oid = colcld.attrelid) AND (colpar.attname = colcld.attname))\n Join Filter: (colpar.atttypid <> colcld.atttypid)\n -> Merge Join (cost=144034.27..151009.09 rows=105280 width=123) (actual time=514.820..514.820 rows=0 loops=4)\n Merge Cond: (colpar.attrelid = (((qa.parent)::regclass)::oid))\n -> Sort (cost=143965.78..145676.59 rows=684322 width=72) (actual time=514.790..514.790 rows=1 loops=4)\n Sort Key: colpar.attrelid\n Sort Method: external merge Disk: 78448kB\n -> Parallel Seq Scan on pg_attribute colpar (cost=0.00..77640.22 rows=684322 width=72) (actual time=0.011..164.106 rows=445582 loops=4)\n -> Sort (cost=68.49..70.94 rows=980 width=55) (actual time=0.031..0.031 rows=0 loops=3)\n Sort Key: (((qa.parent)::regclass)::oid)\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on queued_alters qa (cost=0.00..19.80 rows=980 width=55) (actual time=0.018..0.018 rows=0 loops=3)\n -> Hash (cost=92010.97..92010.97 rows=2121397 width=72) (actual time=1786.056..1786.056 rows=1782330 loops=4)\n Buckets: 2097152 Batches: 2 Memory Usage: 106870kB\n -> Seq Scan on pg_attribute colcld (cost=0.00..92010.97 rows=2121397 width=72) (actual time=0.027..731.554 rows=1782330 loops=4)\n\nAs the queued_alters table is typically empty (and autoanalyzed with\nrelpages=0), I see \"why\":\n\n./src/backend/optimizer/util/plancat.c\n| if (curpages < 10 &&\n| rel->rd_rel->relpages == 0 &&\n| !rel->rd_rel->relhassubclass &&\n| rel->rd_rel->relkind != RELKIND_INDEX)\n| curpages = 10;\n\n\nIndeed it works much better if I add a child table as a test/kludge:\n\n -> Sort (cost=306322.49..306323.16 rows=271 width=403) (actual time=4.945..4.945 
rows=0 loops=1)\n Sort Key: (regexp_replace((qa.child)::text, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$'::text, '\\3\\5'::text)) DESC, (((qa.child)::text ~ '_[0-9]{6}$'::text)) DESC, (regexp_replace((qa.child)::text, '.*_'::text, ''::text)) DESC\n Sort Method: quicksort Memory: 25kB\n -> GroupAggregate (cost=306089.46..306321.13 rows=271 width=403) (actual time=4.938..4.938 rows=0 loops=1)\n Group Key: qa.child, qa.parent\n -> Sort (cost=306089.46..306127.06 rows=15038 width=342) (actual time=4.936..4.936 rows=0 loops=1)\n Sort Key: qa.child, qa.parent\n Sort Method: quicksort Memory: 25kB\n -> Gather (cost=149711.02..305046.10 rows=15038 width=342) (actual time=4.932..4.932 rows=0 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n -> Hash Join (cost=148711.02..302542.30 rows=4851 width=342) (actual time=0.139..0.139 rows=0 loops=4)\n Hash Cond: ((((qa.child)::regclass)::oid = colcld.attrelid) AND (colpar.attname = colcld.attname))\n Join Filter: (colpar.atttypid <> colcld.atttypid)\n -> Hash Join (cost=18.10..125851.98 rows=29113 width=342) (actual time=0.137..0.137 rows=0 loops=4)\n Hash Cond: (colpar.attrelid = ((qa.parent)::regclass)::oid)\n -> Parallel Seq Scan on pg_attribute colpar (cost=0.00..77640.22 rows=684322 width=72) (actual time=0.005..0.005 rows=1 loops=4)\n -> Hash (cost=14.71..14.71 rows=271 width=274) (actual time=0.016..0.016 rows=0 loops=4)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Append (cost=0.00..14.71 rows=271 width=274) (actual time=0.016..0.016 rows=0 loops=4)\n -> Seq Scan on queued_alters qa (cost=0.00..2.21 rows=21 width=55) (actual time=0.012..0.012 rows=0 loops=4)\n -> Seq Scan on qa2 qa_1 (cost=0.00..12.50 rows=250 width=292) (actual time=0.003..0.003 rows=0 loops=4)\n -> Hash (cost=92010.97..92010.97 rows=2121397 width=72) (never executed)\n -> Seq Scan on pg_attribute colcld (cost=0.00..92010.97 rows=2121397 width=72) (never executed)\n\nBut is there a better way (I don't consider adding a row of junk to be a significant improvement).\n\nThanks in advance for any suggestion.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
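The relpages = 0 behaviour is easy to reproduce on a scratch table; a minimal sketch (table name made up) showing that an empty, freshly analyzed table still gets the ten-page default from the plancat.c code quoted above:

CREATE TABLE empty_demo (child text, parent text);
ANALYZE empty_demo;

SELECT relpages, reltuples FROM pg_class WHERE relname = 'empty_demo';
-- both stay at 0 even though the analyze ran

EXPLAIN SELECT * FROM empty_demo;
-- the row estimate still corresponds to roughly ten pages' worth of tuples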
"msg_date": "Fri, 10 Nov 2017 14:40:43 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "overestimate on empty table"
},
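A minimal sketch reproducing the estimate discussed above, on a throwaway table (the table name is a stand-in for queued_alters, and the exact row count printed by EXPLAIN varies with version and row width):

    -- Freshly created, empty, never analyzed: relpages = reltuples = 0,
    -- so plancat.c falls back to assuming 10 pages.
    CREATE TABLE queued_alters_demo (child text, parent text);
    EXPLAIN SELECT * FROM queued_alters_demo;

    -- ANALYZE on an empty table stores relpages = reltuples = 0 again,
    -- so the 10-page fallback (and the overestimate) persists.
    ANALYZE queued_alters_demo;
    EXPLAIN SELECT * FROM queued_alters_demo;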
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> As the queued_alters table is typically empty (and autoanalyzed with\n> relpages=0), I see \"why\":\n\n> ./src/backend/optimizer/util/plancat.c\n> | if (curpages < 10 &&\n> | rel->rd_rel->relpages == 0 &&\n> | !rel->rd_rel->relhassubclass &&\n> | rel->rd_rel->relkind != RELKIND_INDEX)\n> | curpages = 10;\n\nSo I'm sure you read the comment above that, too.\n\nI'm loath to abandon the principle that the planner should not believe\nthat tables are empty/tiny without some forcing function. There are\ngoing to be way more people screaming about the plans they get from\ntoo-small rowcount estimates than the reverse. However, maybe we could\ndo better about detecting whether a vacuum or analyze has really happened.\n(Autovacuum won't normally touch a table until a fair number of rows have\nbeen put in it, so if a table is tiny but has been vacuumed, we can\npresume that that was a manual action.)\n\nOne idea is to say that relpages = reltuples = 0 is only the state that\nprevails for a freshly-created table, and that VACUUM or ANALYZE should\nalways set relpages to at least 1 even if the physical size is zero.\nDunno if that would confuse people. Or we could bite the bullet and\nadd a \"relanalyzed\" bool flag to pg_class. It's not like that's going\nto be a noticeable percentage increase in the row width ...\n\n> But is there a better way (I don't consider adding a row of junk to be a significant improvement).\n\nNot ATM.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Nov 2017 16:19:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overestimate on empty table"
},
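To see the state being discussed here (the stored estimates plus whether a manual or automatic vacuum/analyze has ever run), something like the following can be used; the table name is a placeholder:

    SELECT c.relname, c.relpages, c.reltuples,
           s.last_vacuum, s.last_autovacuum, s.last_analyze, s.last_autoanalyze
    FROM   pg_class c
    JOIN   pg_stat_user_tables s ON s.relid = c.oid
    WHERE  c.relname = 'queued_alters';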
{
"msg_contents": "On Fri, Nov 10, 2017 at 04:19:41PM -0500, Tom Lane wrote:\n> Justin Pryzby <[email protected]> writes:\n> > (or, the opposite of the more common problem)\n\n> > As the queued_alters table is typically empty (and autoanalyzed with\n> > relpages=0), I see \"why\":\n> \n> > ./src/backend/optimizer/util/plancat.c\n> > | if (curpages < 10 &&\n> > | rel->rd_rel->relpages == 0 &&\n> > | !rel->rd_rel->relhassubclass &&\n> > | rel->rd_rel->relkind != RELKIND_INDEX)\n> > | curpages = 10;\n> \n> So I'm sure you read the comment above that, too.\n\n> One idea is to say that relpages = reltuples = 0 is only the state that\n> prevails for a freshly-created table, and that VACUUM or ANALYZE should\n> always set relpages to at least 1 even if the physical size is zero.\n\n> Dunno if that would confuse people.\n\nWhat about adding && rel->rd_rel->reltuples==0, and make VACUUM/ANALYZE instead\nset only reltuples=1, since that's already done at costsize.c: clamp_row_est()\nand therefor no additional confusion?\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 Nov 2017 11:19:45 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: overestimate on empty table"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Fri, Nov 10, 2017 at 04:19:41PM -0500, Tom Lane wrote:\n>> One idea is to say that relpages = reltuples = 0 is only the state that\n>> prevails for a freshly-created table, and that VACUUM or ANALYZE should\n>> always set relpages to at least 1 even if the physical size is zero.\n\n>> Dunno if that would confuse people.\n\n> What about adding && rel->rd_rel->reltuples==0, and make VACUUM/ANALYZE instead\n> set only reltuples=1, since that's already done at costsize.c: clamp_row_est()\n> and therefor no additional confusion?\n\n1 tuple in 0 pages is a physically impossible situation, so I'm quite\nsure that way *would* confuse people.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 Nov 2017 12:43:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overestimate on empty table"
}
] |
[
{
"msg_contents": "I am interested in giving the query planner the ability to replan (or\nre-rank plans) after query execution has begun, based on the\nprogression of the query so far.\n\nExample use case:\n\n* A LIMIT 1 query is planned using an expensive scan which the\nplanner expects to return a large number of results, and to terminate\nearly. The reality is the query actually produces no results, and\nthe scan must run to completion, potentially taking thousands of times\nlonger than expected.\n\n* If this plans costs were adjusted mid-execution to reflect the fact\nthat the scan is producing far fewer rows than expected, then another\nquery plan might come out ahead, which would complete far faster.\n\n\nHas this been done before? Are there any pitfalls to beware of?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 16:44:49 +0000",
"msg_from": "Oliver Mattos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner gaining the ability to replanning after start of query\n execution."
},
{
"msg_contents": "Hello,\r\n\r\nI'd love to have some sort of dynamic query feedback, yet it's very complicated to do it right. I am not convinced that changing the plan during a single execution is the right way to do it, not only because it sounds intrusive to do crazy things in the executor, but also because don't understand why the new plan should be any better than the old one. Can you be more elaborate how you'd want to go about it?\r\n\r\nIn your example (which presumably just has a single relation), we have no notion of whether the scan returns no rows because we were unlucky, because just the first few pages were empty of matching rows (which in my experience happens more often), or because the cardinality estimation is wrong. Even if the cardinality estimation is wrong, we have no notion of which predicate or predicate combination actually caused the misestimation. If the first few pages where empty, the same might happen with every order (so also with every available indexscan). Imagine a very simple seqscan plan of \r\nselect * from mytab where a = 3 and b = 40 limit 1\r\nEven if we know the cardinality is overestimated, we have no idea whether the cardinality of a = 3 or b = 40 is wrong or they just correlate, so there is no notion of which is actually the cheapest plan. Usual workaround for most of these queries is to add an order by (which has the nice addition of having a deterministic result) with an appropriate complex index, usually resulting in indexscans.\r\n\r\nWhile we actually know more after the first execution of a nodes like materialize, sort or hash nodes, I rarely encounter materialize nodes in the wild. Consequently that is the place where the work is usually already done, which is especially true with the hash node. Even though it still might be more optimal to switch from a mergejoin to a hashjoin in some cases, I doubt that's worth any work (and even less the maintenance).\r\n\r\nBest regards\r\nArne Roland\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Oliver Mattos\r\nSent: Monday, November 13, 2017 5:45 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\r\n\r\nI am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\r\n\r\nExample use case:\r\n\r\n* A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\r\nearly. The reality is the query actually produces no results, and\r\nthe scan must run to completion, potentially taking thousands of times longer than expected.\r\n\r\n* If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\r\n\r\n\r\nHas this been done before? 
Are there any pitfalls to beware of?\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Oliver Mattos\r\nSent: Monday, November 13, 2017 5:45 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\r\n\r\nI am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\r\n\r\nExample use case:\r\n\r\n* A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\r\nearly. The reality is the query actually produces no results, and\r\nthe scan must run to completion, potentially taking thousands of times longer than expected.\r\n\r\n* If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\r\n\r\n\r\nHas this been done before? Are there any pitfalls to beware of?\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 20:06:20 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner gaining the ability to replanning after\n start of query execution."
},
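A self-contained sketch of the mytab example above, with two perfectly correlated columns so that per-column statistics overestimate the number of matching rows; names and sizes are made up:

    CREATE TABLE mytab (id bigserial PRIMARY KEY, a int, b int, payload text);
    INSERT INTO mytab (a, b, payload)
    SELECT i % 100, i % 100, repeat('x', 100)
    FROM   generate_series(1, 1000000) AS i;
    ANALYZE mytab;

    -- Estimated: ~1,000,000 * (1/100) * (1/100) = ~100 matching rows, so the
    -- planner expects the LIMIT 1 to stop almost immediately.  In reality
    -- a = b always holds, so a = 3 AND b = 40 matches nothing and the scan
    -- runs to the end of the table.
    EXPLAIN ANALYZE
    SELECT * FROM mytab WHERE a = 3 AND b = 40 LIMIT 1;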
{
"msg_contents": "> Can you be more elaborate how you'd want to go about it?\n\nMy initial approach would be to try to identify places in the plan\nwhere selectivity is seriously over or underestimated. I would reuse\nthe instrumentation infrastructure's counts of filtered and returned\ntuples for each execnode, and periodically call back into the planner\n(for example at every power of 2 tuples processed).\n\nThe planner would have a wrapper to clauselist_selectivity which\nsomehow combines the existing estimate with the filtered and returned\ntuples so far. Exactly how to merge them isn't clear, but I could\nimagine using a poisson distribution to calculate the probability that\nthe selectivity estimate is representative of the filtered and\nreturned numbers, and then blending the two linearly based on that\nestimate.\n\nWhen the planner has re-estimated the cost of the current plan, a\ndiscount would be applied for the percentage of each execnode\ncompleted (rows processed / estimated rows), and all candidate plans\ncompared.\n\nIf another candidate plan is now lower cost, the current plan would be\nterminated[1] by setting a flag instructing each execnode to return as\nif it had reached the end of the input, although still caching the\nnode selectivity values, and the new plan started from scratch.\n\nThe old plan is kept within the query planner candidate list, together\nwith it's cached selectivity values. If at some point it again is\ncheaper, it is started from scratch too.\n\n\n> Even if we know the cardinality is overestimated, we have no idea whether the cardinality of a = 3 or b = 40 is wrong or they just correlate\n\nThe goal would be not to know which is wrong, but to try each,\ndiscarding it if it turns out worse than we estimated. Processing a\nfew hundred rows of each of 5 plans is tiny compared to a scan of 1M\nrows...\n\n\n[1]: An improvement here (with much more code complexity) is to keep\nmultiple partially executed plans around, so that whichever one is\nmost promising can be worked on, but can be halted and resumed later\nas selectivity (and hence cost) estimates change.\n\nOn Mon, Nov 13, 2017 at 8:06 PM, Arne Roland <[email protected]> wrote:\n> Hello,\n>\n> I'd love to have some sort of dynamic query feedback, yet it's very complicated to do it right. I am not convinced that changing the plan during a single execution is the right way to do it, not only because it sounds intrusive to do crazy things in the executor, but also because don't understand why the new plan should be any better than the old one. Can you be more elaborate how you'd want to go about it?\n>\n> In your example (which presumably just has a single relation), we have no notion of whether the scan returns no rows because we were unlucky, because just the first few pages were empty of matching rows (which in my experience happens more often), or because the cardinality estimation is wrong. Even if the cardinality estimation is wrong, we have no notion of which predicate or predicate combination actually caused the misestimation. If the first few pages where empty, the same might happen with every order (so also with every available indexscan). Imagine a very simple seqscan plan of\n> select * from mytab where a = 3 and b = 40 limit 1\n> Even if we know the cardinality is overestimated, we have no idea whether the cardinality of a = 3 or b = 40 is wrong or they just correlate, so there is no notion of which is actually the cheapest plan. 
Usual workaround for most of these queries is to add an order by (which has the nice addition of having a deterministic result) with an appropriate complex index, usually resulting in indexscans.\n>\n> While we actually know more after the first execution of a nodes like materialize, sort or hash nodes, I rarely encounter materialize nodes in the wild. Consequently that is the place where the work is usually already done, which is especially true with the hash node. Even though it still might be more optimal to switch from a mergejoin to a hashjoin in some cases, I doubt that's worth any work (and even less the maintenance).\n>\n> Best regards\n> Arne Roland\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Oliver Mattos\n> Sent: Monday, November 13, 2017 5:45 PM\n> To: [email protected]\n> Subject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\n>\n> I am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\n>\n> Example use case:\n>\n> * A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\n> early. The reality is the query actually produces no results, and\n> the scan must run to completion, potentially taking thousands of times longer than expected.\n>\n> * If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\n>\n>\n> Has this been done before? Are there any pitfalls to beware of?\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Oliver Mattos\n> Sent: Monday, November 13, 2017 5:45 PM\n> To: [email protected]\n> Subject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\n>\n> I am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\n>\n> Example use case:\n>\n> * A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\n> early. The reality is the query actually produces no results, and\n> the scan must run to completion, potentially taking thousands of times longer than expected.\n>\n> * If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\n>\n>\n> Has this been done before? Are there any pitfalls to beware of?\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 21:51:32 +0000",
"msg_from": "Oliver Mattos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner gaining the ability to replanning after\n start of query execution."
},
{
"msg_contents": "Hello,\r\n\r\nthat method is bound to introduce errors if the physical location of a row correlates strongly with a column, imagine \"and my_timestamp> now() - INTERVAL '1 year'\" as part of a where clause of a query without limit on an insert only table which is one and a half years old with a seqscan. There might be similar effects if an index on the timestamp is used to go about a query, if other rows of the filter correlate.\r\n\r\nThis method furthermore only judges filter predicates.\r\nSo it's not that easy to just go about the expected rowcount of a single node, since the underling node might return a totally inaccurate number of rows. While that isn't that common with underlying seqscans, it is very frequent if an indexscan is used without being able to rely on the MCV. While it's still possible to notice a misestimation there is no sense of how wrong it is, until the rows are already processed.\r\n\r\nFurthermore: Did you think about parallel plans and most importantly cursors?\r\n\r\nBest regards\r\nArne Roland\r\n\r\n-----Original Message-----\r\nFrom: Oliver Mattos [mailto:[email protected]] \r\nSent: Monday, November 13, 2017 10:52 PM\r\nTo: Arne Roland <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\r\n\r\n> Can you be more elaborate how you'd want to go about it?\r\n\r\nMy initial approach would be to try to identify places in the plan\r\nwhere selectivity is seriously over or underestimated. I would reuse\r\nthe instrumentation infrastructure's counts of filtered and returned tuples for each execnode, and periodically call back into the planner (for example at every power of 2 tuples processed).\r\n\r\nThe planner would have a wrapper to clauselist_selectivity which somehow combines the existing estimate with the filtered and returned\r\ntuples so far. Exactly how to merge them isn't clear, but I could\r\nimagine using a poisson distribution to calculate the probability that the selectivity estimate is representative of the filtered and returned numbers, and then blending the two linearly based on that estimate.\r\n\r\nWhen the planner has re-estimated the cost of the current plan, a discount would be applied for the percentage of each execnode completed (rows processed / estimated rows), and all candidate plans compared.\r\n\r\nIf another candidate plan is now lower cost, the current plan would be terminated[1] by setting a flag instructing each execnode to return as if it had reached the end of the input, although still caching the node selectivity values, and the new plan started from scratch.\r\n\r\nThe old plan is kept within the query planner candidate list, together with it's cached selectivity values. If at some point it again is cheaper, it is started from scratch too.\r\n\r\n\r\n> Even if we know the cardinality is overestimated, we have no idea \r\n> whether the cardinality of a = 3 or b = 40 is wrong or they just \r\n> correlate\r\n\r\nThe goal would be not to know which is wrong, but to try each, discarding it if it turns out worse than we estimated. 
Processing a few hundred rows of each of 5 plans is tiny compared to a scan of 1M rows...\r\n\r\n\r\n[1]: An improvement here (with much more code complexity) is to keep\r\nmultiple partially executed plans around, so that whichever one is most promising can be worked on, but can be halted and resumed later as selectivity (and hence cost) estimates change.\r\n\r\nOn Mon, Nov 13, 2017 at 8:06 PM, Arne Roland <[email protected]> wrote:\r\n> Hello,\r\n>\r\n> I'd love to have some sort of dynamic query feedback, yet it's very complicated to do it right. I am not convinced that changing the plan during a single execution is the right way to do it, not only because it sounds intrusive to do crazy things in the executor, but also because don't understand why the new plan should be any better than the old one. Can you be more elaborate how you'd want to go about it?\r\n>\r\n> In your example (which presumably just has a single relation), we have \r\n> no notion of whether the scan returns no rows because we were unlucky, \r\n> because just the first few pages were empty of matching rows (which in my experience happens more often), or because the cardinality estimation is wrong. Even if the cardinality estimation is wrong, we have no notion of which predicate or predicate combination actually caused the misestimation. If the first few pages where empty, the same might happen with every order (so also with every available indexscan). Imagine a very simple seqscan plan of select * from mytab where a = 3 and b = 40 limit 1 Even if we know the cardinality is overestimated, we have no idea whether the cardinality of a = 3 or b = 40 is wrong or they just correlate, so there is no notion of which is actually the cheapest plan. Usual workaround for most of these queries is to add an order by (which has the nice addition of having a deterministic result) with an appropriate complex index, usually resulting in indexscans.\r\n>\r\n> While we actually know more after the first execution of a nodes like materialize, sort or hash nodes, I rarely encounter materialize nodes in the wild. Consequently that is the place where the work is usually already done, which is especially true with the hash node. Even though it still might be more optimal to switch from a mergejoin to a hashjoin in some cases, I doubt that's worth any work (and even less the maintenance).\r\n>\r\n> Best regards\r\n> Arne Roland\r\n>\r\n> -----Original Message-----\r\n> From: [email protected] \r\n> [mailto:[email protected]] On Behalf Of Oliver \r\n> Mattos\r\n> Sent: Monday, November 13, 2017 5:45 PM\r\n> To: [email protected]\r\n> Subject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\r\n>\r\n> I am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\r\n>\r\n> Example use case:\r\n>\r\n> * A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\r\n> early. The reality is the query actually produces no results, and\r\n> the scan must run to completion, potentially taking thousands of times longer than expected.\r\n>\r\n> * If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\r\n>\r\n>\r\n> Has this been done before? 
Are there any pitfalls to beware of?\r\n>\r\n>\r\n> --\r\n> Sent via pgsql-performance mailing list \r\n> ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n>\r\n>\r\n>\r\n>\r\n> -----Original Message-----\r\n> From: [email protected] \r\n> [mailto:[email protected]] On Behalf Of Oliver \r\n> Mattos\r\n> Sent: Monday, November 13, 2017 5:45 PM\r\n> To: [email protected]\r\n> Subject: [PERFORM] Query planner gaining the ability to replanning after start of query execution.\r\n>\r\n> I am interested in giving the query planner the ability to replan (or re-rank plans) after query execution has begun, based on the progression of the query so far.\r\n>\r\n> Example use case:\r\n>\r\n> * A LIMIT 1 query is planned using an expensive scan which the planner expects to return a large number of results, and to terminate\r\n> early. The reality is the query actually produces no results, and\r\n> the scan must run to completion, potentially taking thousands of times longer than expected.\r\n>\r\n> * If this plans costs were adjusted mid-execution to reflect the fact that the scan is producing far fewer rows than expected, then another query plan might come out ahead, which would complete far faster.\r\n>\r\n>\r\n> Has this been done before? Are there any pitfalls to beware of?\r\n>\r\n>\r\n> --\r\n> Sent via pgsql-performance mailing list \r\n> ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n>\r\n>\r\n>\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 22:48:49 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner gaining the ability to replanning after\n start of query execution."
},
{
"msg_contents": "Oliver Mattos <[email protected]> writes:\n>> Can you be more elaborate how you'd want to go about it?\n\n> ... If another candidate plan is now lower cost, the current plan would be\n> terminated[1] by setting a flag instructing each execnode to return as\n> if it had reached the end of the input, although still caching the\n> node selectivity values, and the new plan started from scratch.\n\nQuite aside from the implementation difficulties you'll have, that\napproach is a show-stopper right there. You can't just restart from\nscratch, because we may already have shipped rows to the client, or\nfor DML cases already inserted/updated/deleted rows (and even if you\ncould roll those back, we've possibly fired triggers with unpredictable\nside effects). Queries containing volatile functions are another no-fly\nzone for this approach.\n\nI can't see any way around that without unacceptable performance costs\n(i.e. buffering all the rows until we're done) or wire-protocol breakage.\n\nI think that a more practical way to address the class of problems\nyou're talking about is to teach the planner to have some notion of\nworst-case as well as expected-case costs, and then incorporate some\nperhaps-configurable amount of risk aversion in its choices.\n\n\t\t\tregards, tom lane\n\nPS: please do not top-post, and do not quote the entire darn thread\nin each message.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 17:49:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner gaining the ability to replanning after start of\n query execution."
},
{
"msg_contents": "> You can't just restart from scratch, because we may already have shipped rows to the client\n\nFor v1, replanning wouldn't be an option if rows have already been\nshipped, or for DML statements.\n\n> parallel plans and most importantly cursors?\nParallel plans look do-able with the same approach, but cursor use I'd\nprobably stop replanning as soon as the first row is delivered to the\nclient, as above. One could imagine more complex approaches like a\nlimited size buffer of 'delivered' rows, allowing a new plan to be\nselected and the delivered rows excluded from the new plans resultset\nvia a special extra prepending+dupe filtering execnode. The memory\nand computation costs of that execnode would be factored into the\nreplanning decision like any other node.\n\n\n>errors if the physical location of a row correlates strongly with a column\nThis is my largest concern. These cases already lead to large errors\ncurrently (SELECT * FROM foo WHERE created_date = today LIMIT 1) might\nscan all data, only to find all of today's records in the last\nphysical block.\n\nIt's hard to say if replacing one bad estimate with another will lead\nto overall better/worse results... My hope is that in most cases a\nbunch of plans will be tried, all end up with cost estimates revised\nup a lot, and then one settled on as rows start getting passed to\nupper layers.\n\n>underling node might return a totally inaccurate number of rows for index scans\nOne might imagine using the last returned row as an extra histogram\npoint when estimating how many rows are left in an index scan. That\nshould at least make the estimate more accurate than it is without\nfeedback.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 Nov 2017 23:20:44 +0000",
"msg_from": "Oliver Mattos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query planner gaining the ability to replanning after\n start of query execution."
},
{
"msg_contents": "As a first step, before relaunching the query with a new plan I would be\nhappy to be able to get information about sql queries having wrong\nestimates.\n\nMaybe thoses SQL queries could be collected using something similar to\n\"auto_explain\" module (tracing finished or cancelled queries).\n\nIf the \"corrected plan\" taking into account real cardinalities was proposed\n(with some advices on how to get it) it would be a great tuning adviser ;o)\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 Nov 2017 14:26:16 -0700 (MST)",
"msg_from": "legrand legrand <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner gaining the ability to replanning after start of\n query execution."
},
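For the collection part, auto_explain already gets close to this: it can log the plans of slow queries together with the actual row counts, so misestimates can be found by scanning the server log. A sketch (the threshold is illustrative, and log_analyze adds measurable overhead):

    LOAD 'auto_explain';                          -- or via shared_preload_libraries
    SET auto_explain.log_min_duration = '5s';
    SET auto_explain.log_analyze = on;            -- include actual rows/times
    SET auto_explain.log_nested_statements = on;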
{
"msg_contents": "May I suggest the recent discussion over the dreaded NL issue ... I had some\nsimilar ideas.\n\nhttp://www.postgresql-archive.org/OLAP-reporting-queries-fall-into-nested-loops-over-seq-scans-or-other-horrible-planner-choices-td5990160.html\n\nThis could be massively useful and a huge leap in the industry.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Nov 2017 12:57:44 -0700 (MST)",
"msg_from": "Gunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner gaining the ability to replanning after start of\n query execution."
}
] |
[
{
"msg_contents": "Hello,\nI am having performance issues with one of the query.\nThe query is taking 39 min to fetch 3.5 mil records.\n\nI want to reduce that time to 15 mins.\ncould you please suggest something to its performance?\n\nserver configuration:\n CPUs = 4\nmemory = 16 GM\nshared_buffers = 3 GB\nwork_mem = 100MB\neffective_cache_size = 12 GB\n\nwe are doing the vacuum/analyze regularly on the database.\n\nattached is the query with its explain plan.\n\nThanks,\nSamir Magar\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 15 Nov 2017 15:03:39 +0530",
"msg_from": "Samir Magar <[email protected]>",
"msg_from_op": true,
"msg_subject": "query performance issue"
},
{
"msg_contents": "Hi\n\nplease send EXPLAIN ANALYZE output.\n\nRegards\n\nPavel\n\n2017-11-15 10:33 GMT+01:00 Samir Magar <[email protected]>:\n\n> Hello,\n> I am having performance issues with one of the query.\n> The query is taking 39 min to fetch 3.5 mil records.\n>\n> I want to reduce that time to 15 mins.\n> could you please suggest something to its performance?\n>\n> server configuration:\n> CPUs = 4\n> memory = 16 GM\n> shared_buffers = 3 GB\n> work_mem = 100MB\n> effective_cache_size = 12 GB\n>\n> we are doing the vacuum/analyze regularly on the database.\n>\n> attached is the query with its explain plan.\n>\n> Thanks,\n> Samir Magar\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nHiplease send EXPLAIN ANALYZE output.RegardsPavel2017-11-15 10:33 GMT+01:00 Samir Magar <[email protected]>:Hello,I am having performance issues with one of the query.The query is taking 39 min to fetch 3.5 mil records.I want to reduce that time to 15 mins. could you please suggest something to its performance?server configuration: CPUs = 4memory = 16 GMshared_buffers = 3 GBwork_mem = 100MBeffective_cache_size = 12 GBwe are doing the vacuum/analyze regularly on the database. attached is the query with its explain plan.Thanks,Samir Magar \n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 15 Nov 2017 10:43:23 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance issue"
},
{
"msg_contents": "please find the EXPLAIN ANALYZE output.\n\nOn Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> please send EXPLAIN ANALYZE output.\n>\n> Regards\n>\n> Pavel\n>\n> 2017-11-15 10:33 GMT+01:00 Samir Magar <[email protected]>:\n>\n>> Hello,\n>> I am having performance issues with one of the query.\n>> The query is taking 39 min to fetch 3.5 mil records.\n>>\n>> I want to reduce that time to 15 mins.\n>> could you please suggest something to its performance?\n>>\n>> server configuration:\n>> CPUs = 4\n>> memory = 16 GM\n>> shared_buffers = 3 GB\n>> work_mem = 100MB\n>> effective_cache_size = 12 GB\n>>\n>> we are doing the vacuum/analyze regularly on the database.\n>>\n>> attached is the query with its explain plan.\n>>\n>> Thanks,\n>> Samir Magar\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 15 Nov 2017 18:24:45 +0530",
"msg_from": "Samir Magar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query performance issue"
},
{
"msg_contents": "2017-11-15 13:54 GMT+01:00 Samir Magar <[email protected]>:\n\n> please find the EXPLAIN ANALYZE output.\n>\n> On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule <[email protected]>\n> wrote:\n>\n>> Hi\n>>\n>> please send EXPLAIN ANALYZE output.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> 2017-11-15 10:33 GMT+01:00 Samir Magar <[email protected]>:\n>>\n>>> Hello,\n>>> I am having performance issues with one of the query.\n>>> The query is taking 39 min to fetch 3.5 mil records.\n>>>\n>>> I want to reduce that time to 15 mins.\n>>> could you please suggest something to its performance?\n>>>\n>>> server configuration:\n>>> CPUs = 4\n>>> memory = 16 GM\n>>> shared_buffers = 3 GB\n>>> work_mem = 100MB\n>>> effective_cache_size = 12 GB\n>>>\n>>> we are doing the vacuum/analyze regularly on the database.\n>>>\n>>> attached is the query with its explain plan.\n>>>\n>>>\n\nThere is wrong plan due wrong estimation\n\nfor this query you should to penalize nested loop\n\nset enable_nestloop to off;\n\nbefore evaluation of this query\n\n\nThanks,\n>>> Samir Magar\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.\n>>> org)\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>>\n>>\n>\n\n2017-11-15 13:54 GMT+01:00 Samir Magar <[email protected]>:please find the EXPLAIN ANALYZE output.On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule <[email protected]> wrote:Hiplease send EXPLAIN ANALYZE output.RegardsPavel2017-11-15 10:33 GMT+01:00 Samir Magar <[email protected]>:Hello,I am having performance issues with one of the query.The query is taking 39 min to fetch 3.5 mil records.I want to reduce that time to 15 mins. could you please suggest something to its performance?server configuration: CPUs = 4memory = 16 GMshared_buffers = 3 GBwork_mem = 100MBeffective_cache_size = 12 GBwe are doing the vacuum/analyze regularly on the database. attached is the query with its explain plan.There is wrong plan due wrong estimation for this query you should to penalize nested loopset enable_nestloop to off;before evaluation of this queryThanks,Samir Magar \n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 15 Nov 2017 14:12:10 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance issue"
},
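If disabling nested loops does help, it can be scoped to just the problematic statement instead of the whole session; a sketch:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
    -- run the slow query (or EXPLAIN ANALYZE it) here
    COMMIT;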
{
"msg_contents": "On Wed, Nov 15, 2017 at 03:03:39PM +0530, Samir Magar wrote:\n> I am having performance issues with one of the query.\n> The query is taking 39 min to fetch 3.5 mil records.\n> \n> I want to reduce that time to 15 mins.\n> could you please suggest something to its performance?\n\n> \"HashAggregate (cost=4459.68..4459.69 rows=1 width=27) (actual time=2890035.403..2892173.601 rows=3489861 loops=1)\"\n\nLooks to me like the problem is here:\n\n> \" -> Index Only Scan using idxdq7 on dlr_qlfy (cost=0.43..4.45 ROWS=1 width=16) (actual time=0.009..0.066 ROWS=121 loops=103987)\"\n> \" Index Cond: ((qlfy_grp_id = dlr_grp.dlr_grp_id) AND (qlf_flg = 'N'::bpchar) AND (cog_grp_id = dlr_grp_dlr_xref_1.dlr_grp_id))\"\n> \" Heap Fetches: 0\"\n\nReturning 100x more rows than expected and bubbling up through a cascade of\nnested loops.\n\nAre those 3 conditions independent ? Or, perhaps, are rows for which\n\"qlfy_grp_id=dlr_grp.dlr_grp_id\" is true always going to have\n\"cog_grp_id = dlr_grp_dlr_xref_1.dlr_grp_id\" ?\n\nEven if it's not \"always\" true, if rows which pass the one condition are more\nlikely to pass the other condition, this will cause an underestimate, as\nobvserved.\n\nYou can do an experiment SELECTing just from those two tables joined and see if\nyou can reproduce the problem with poor rowcount estimate (hopefully in much\nless than 15min).\n\nIf you can't drop one of the two conditions, you can make PG treat it as a\nsingle condition for purpose of determining expected selectivity, using a ROW()\ncomparison like:\n\nROW(qlfy_grp_id, cog_grp_id) = ROW(dlr_grp.dlr_grp_id, dlr_grp_dlr_xref_1.dlr_grp_id)\n\nIf you're running PG96+ you may also be able to work around this by adding FKs.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Nov 2017 08:16:21 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance issue"
},
{
"msg_contents": "On 11/15/2017 8:12, Pavel Stehule wrote:\n> There is wrong plan due wrong estimation\n>\n> for this query you should to penalize nested loop\n>\n> set enable_nestloop to off;\n>\n> before evaluation of this query\n\nYou are not the only one with this issue. May I suggest to look at this \nthread a little earlier this month.\n\nhttp://www.postgresql-archive.org/OLAP-reporting-queries-fall-into-nested-loops-over-seq-scans-or-other-horrible-planner-choices-tp5990160.html\n\nwhere this has been discussed in some length.\n\nregards,\n-Gunther\n\n\n\n\n\n\n\n\n\nOn 11/15/2017 8:12, Pavel Stehule\n wrote:\n\n\n\n\nThere is wrong plan due wrong\n estimation \n\n\nfor this query you should to penalize nested loop\n\n\nset enable_nestloop to off;\n\n\nbefore evaluation of this query\n\n\n\n\n\n You are not the only one with this issue. May I suggest to look at\n this thread a little earlier this month.\n\nhttp://www.postgresql-archive.org/OLAP-reporting-queries-fall-into-nested-loops-over-seq-scans-or-other-horrible-planner-choices-tp5990160.html\n\n where this has been discussed in some length.\n\n regards,\n -Gunther",
"msg_date": "Wed, 15 Nov 2017 14:58:17 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance issue"
},
{
"msg_contents": "2017-11-15 20:58 GMT+01:00 Gunther <[email protected]>:\n\n>\n> On 11/15/2017 8:12, Pavel Stehule wrote:\n>\n> There is wrong plan due wrong estimation\n>\n> for this query you should to penalize nested loop\n>\n> set enable_nestloop to off;\n>\n> before evaluation of this query\n>\n>\n> You are not the only one with this issue. May I suggest to look at this\n> thread a little earlier this month.\n>\n> http://www.postgresql-archive.org/OLAP-reporting-queries-\n> fall-into-nested-loops-over-seq-scans-or-other-horrible-\n> planner-choices-tp5990160.html\n>\n> where this has been discussed in some length.\n>\n\nIt is typical issue. The source of these problems are correlations between\ncolumns (it can be fixed partially by multicolumn statistics in PostgreSQL\n10). Another problem is missing multi table statistics - PostgreSQL planner\nexpects so any value from dictionary has same probability, what is not\nusually true. Some OLAP techniques like calendar tables has usually very\nbad impact on estimations with this results.\n\nRegards\n\nPavel\n\n\n> regards,\n> -Gunther\n>\n>\n>\n\n2017-11-15 20:58 GMT+01:00 Gunther <[email protected]>:\n\n\nOn 11/15/2017 8:12, Pavel Stehule\n wrote:\n\n\n\n\nThere is wrong plan due wrong\n estimation \n\n\nfor this query you should to penalize nested loop\n\n\nset enable_nestloop to off;\n\n\nbefore evaluation of this query\n\n\n\n\n\n You are not the only one with this issue. May I suggest to look at\n this thread a little earlier this month.\n\nhttp://www.postgresql-archive.org/OLAP-reporting-queries-fall-into-nested-loops-over-seq-scans-or-other-horrible-planner-choices-tp5990160.html\n\n where this has been discussed in some length.It is typical issue. The source of these problems are correlations between columns (it can be fixed partially by multicolumn statistics in PostgreSQL 10). Another problem is missing multi table statistics - PostgreSQL planner expects so any value from dictionary has same probability, what is not usually true. Some OLAP techniques like calendar tables has usually very bad impact on estimations with this results.RegardsPavel \n\n regards,\n -Gunther",
"msg_date": "Wed, 15 Nov 2017 21:07:01 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance issue"
}
] |
[
{
"msg_contents": "I just noticed that PG10 CREATE STATISTICS (dependencies) doesn't seem to work\nfor joins on multiple columns; is that right?\n\nWith statistics on table for 20171111 but not 20171110:\n\nts=# CREATE STATISTICS x ON site_id,sect_id FROM eric_enodeb_cell_20171111;\nts=# ANALYZE VERBOSE eric_enodeb_cell_20171111;\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171110 a JOIN eric_enodeb_cell_20171110 b USING(start_time, sect_id) WHERE a.site_id=318 AND sect_id=1489;\nNested Loop (cost=0.83..4565.09 rows=1 width=0) (actual time=23.595..69.541 rows=96 loops=1)\n=> bad estimate on redundant WHERE WITHOUT multivar statistics\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171111 a JOIN eric_enodeb_cell_20171111 b USING(start_time, sect_id) WHERE a.site_id=318 AND sect_id=1489;\nNested Loop (cost=0.83..4862.41 rows=96 width=0) (actual time=0.034..3.882 rows=96 loops=1)\n=> good estimate on redundant WHERE WITH multivar statistics\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171110 a JOIN eric_enodeb_cell_20171110 b USING(start_time, sect_id);\nMerge Join (cost=18249.85..19624.18 rows=54858 width=0) (actual time=157.252..236.945 rows=55050 loops=1)\n=> good estimate on JOIN on SECT_id without stats\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171110 a JOIN eric_enodeb_cell_20171110 b USING(start_time, site_id);\nMerge Join (cost=0.83..14431.81 rows=261499 width=0) (actual time=0.031..259.382 rows=262638 loops=1)\n=> good estimate on JOIN on SITE_id without stats\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171111 a JOIN eric_enodeb_cell_20171111 b USING(start_time, site_id);\nMerge Join (cost=0.83..14706.29 rows=268057 width=0) (actual time=37.360..331.276 rows=268092 loops=1)\n=> good estimate on JOIN on SITE_id with stats\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171111 a JOIN eric_enodeb_cell_20171111 b USING(start_time, sect_id);\nMerge Join (cost=18560.89..19959.67 rows=55944 width=0) (actual time=130.865..198.439 rows=55956 loops=1)\n=> good estimate on JOIN on SECT_id with stats\n\nts=# explain ANALYZE SELECT FROM eric_enodeb_cell_20171111 a JOIN eric_enodeb_cell_20171111 b USING(start_time, sect_id, site_id);\nGather (cost=1000.83..12222.06 rows=460 width=0) (actual time=1.686..149.707 rows=55956 loops=1)\n=> poor estimate on redundant JOIN WITH stats (??)\n\nI've already fixed our reports to avoid this kind of thing and support our PG95\ncustomers, but I tentatively would've expected PG10 MV stats to \"know\" that\nUSING(site_id, sect_id) is no more selective than USING(sect_id), same as it\nknows that's true for WHERE site... AND sect....\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Nov 2017 14:19:13 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE STATISTICS and join selectivity"
},
{
"msg_contents": "On 16 November 2017 at 09:19, Justin Pryzby <[email protected]> wrote:\n> I just noticed that PG10 CREATE STATISTICS (dependencies) doesn't seem to work\n> for joins on multiple columns; is that right?\n\nUnfortunately, for now, they're not used for join selectivity\nestimates, only for the base rel selectivities. That's all there was\ntime for with PG10. This is highly likely to be improved sometime in\nthe future.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 Nov 2017 12:47:29 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS and join selectivity"
}
] |
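For reference, a sketch of creating and inspecting the extended statistics discussed in this thread, using the PostgreSQL 10 catalog layout (later releases move the computed data into pg_statistic_ext_data); as noted in the reply, in PG10 neither kind is consulted for join selectivity:

    CREATE STATISTICS eric_site_sect (ndistinct, dependencies)
        ON site_id, sect_id FROM eric_enodeb_cell_20171111;
    ANALYZE eric_enodeb_cell_20171111;

    SELECT stxname, stxkeys, stxndistinct, stxdependencies
    FROM   pg_statistic_ext
    WHERE  stxname = 'eric_site_sect';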
[
{
"msg_contents": "Dear all\n\nI have successfully installed POWA (http://dalibo.github.io/powa),\nincluding all required extensions, see the following Printscreen of its\noperation of end email.\n\nBut when executing queries in psql- comand line, this queries are not\nmonitored by powa. I have checked that only Postgresql internal catalog\nqueries are shown. .\nI need the Optimize Query functionality and mainly the suggestion of\nindexes.\nBut that does not work, by clicking on the optimize query option, returns\nzero suggestions.\n\nSee below that I created a scenario, with a table with a large amount of\ndata, to check if the tool would suggest some index, and when making a\ncomplex query, no index is suggested.\n\nSomeone uses POWA, knows if they have to configure something so that the\nqueries are monitored and show suggestions ??\n\n---------------------- Printscreens of my environment partially\nworking:--------------\n\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa1.jpeg\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa2.jpeg\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa3.jpeg\n\n-------------------------------------------------------------------------------------------------------\n------------------------ scenario to verify the suggestion of indices\n--------------------\npostgres=# create table city_habitant (number_habitant text);\nCREATE TABLE\npostgres=# insert into city_habitant (number_habitant) select 'São Paulo'\nfrom (select generate_series (1, 4000000)) a;\nINSERT 0 4000000\npostgres=# insert into city_habitant (number_habitant) select 'Rio de\nJaneiro' from (select generate_series (1, 8000000)) a;\nINSERT 0 8000000\npostgres=# insert into city_habitant (number_habitant) select 'Recife'\nfrom (select generate_series (1, 6000000)) a;\nINSERT 0 6000000\npostgres=# insert into city_habitant (number_habitant) select 'Santos'\nfrom (select generate_series (1, 2000000)) a;\nINSERT 0 2000000\npostgres=# insert into city_habitant (number_habitant) select 'Chui' from\n(select generate_series (1, 6)) a;\nINSERT 0 6\npostgres=# SELECT number_habitant, count(number_habitant) FROM\n city_habitant GROUP BY number_habitant;\n number_habitant | count\n-------------------+------\n Rio de Janeiro | 8000000\n Recife | 6000000\n Santos | 2000000\n São Paulo | 4000000\n Chui | 6\n(5 rows)\n\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>\nLivre\nde vírus. www.avast.com\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>.\n<#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>\n\nDear allI have successfully installed POWA (http://dalibo.github.io/powa), including all required extensions, see the following Printscreen of its operation of end email.But when executing queries in psql- comand line, this queries are not monitored by powa. I have checked that only Postgresql internal catalog queries are shown. 
.I need the Optimize Query functionality and mainly the suggestion of indexes.But that does not work, by clicking on the optimize query option, returns zero suggestions.See below that I created a scenario, with a table with a large amount of data, to check if the tool would suggest some index, and when making a complex query, no index is suggested.Someone uses POWA, knows if they have to configure something so that the queries are monitored and show suggestions ??---------------------- Printscreens of my environment partially working:--------------https://sites.google.com/site/eletrolareshop/repositorio/powa1.jpeghttps://sites.google.com/site/eletrolareshop/repositorio/powa2.jpeghttps://sites.google.com/site/eletrolareshop/repositorio/powa3.jpeg------------------------------------------------------------------------------------------------------------------------------- scenario to verify the suggestion of indices -------------------- postgres=# create table city_habitant (number_habitant text);CREATE TABLEpostgres=# insert into city_habitant (number_habitant) select 'São Paulo' from (select generate_series (1, 4000000)) a;INSERT 0 4000000postgres=# insert into city_habitant (number_habitant) select 'Rio de Janeiro' from (select generate_series (1, 8000000)) a;INSERT 0 8000000postgres=# insert into city_habitant (number_habitant) select 'Recife' from (select generate_series (1, 6000000)) a;INSERT 0 6000000postgres=# insert into city_habitant (number_habitant) select 'Santos' from (select generate_series (1, 2000000)) a;INSERT 0 2000000postgres=# insert into city_habitant (number_habitant) select 'Chui' from (select generate_series (1, 6)) a;INSERT 0 6postgres=# SELECT number_habitant, count(number_habitant) FROM city_habitant GROUP BY number_habitant; number_habitant | count-------------------+------ Rio de Janeiro | 8000000 Recife | 6000000 Santos | 2000000 São Paulo | 4000000 Chui | 6(5 rows) \n\n\nLivre de vírus. www.avast.com.",
"msg_date": "Fri, 17 Nov 2017 23:52:04 -0200",
"msg_from": "Neto pr <[email protected]>",
"msg_from_op": true,
"msg_subject": "POWA doesn't show queries executed"
},
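A quick way to check whether the collector behind POWA is seeing user queries at all is to query pg_stat_statements directly in the database being tested; a sketch (column names as of the 9.6/10-era extension):

    SELECT query, calls, total_time
    FROM   pg_stat_statements
    ORDER  BY total_time DESC
    LIMIT  10;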
{
"msg_contents": "Hi,\nYou should probably report your issue at \nhttps://github.com/dalibo/powa/issues\nKR\n\nLe 18/11/2017 à 02:52, Neto pr a écrit :\n> Dear all\n>\n> I have successfully installed POWA (http://dalibo.github.io/powa), \n> including all required extensions, see the following Printscreen of \n> its operation of end email.\n>\n> But when executing queries in psql- comand line, this queries are not \n> monitored by powa. I have checked that only Postgresql internal \n> catalog queries are shown. .\n> I need the Optimize Query functionality and mainly the suggestion of \n> indexes.\n> But that does not work, by clicking on the optimize query option, \n> returns zero suggestions.\n>\n> See below that I created a scenario, with a table with a large amount \n> of data, to check if the tool would suggest some index, and when \n> making a complex query, no index is suggested.\n>\n> Someone uses POWA, knows if they have to configure something so that \n> the queries are monitored and show suggestions ??\n>\n> ---------------------- Printscreens of my environment partially \n> working:--------------\n>\n> https://sites.google.com/site/eletrolareshop/repositorio/powa1.jpeg\n> https://sites.google.com/site/eletrolareshop/repositorio/powa2.jpeg\n> https://sites.google.com/site/eletrolareshop/repositorio/powa3.jpeg\n>\n> -------------------------------------------------------------------------------------------------------\n> ------------------------ scenario to verify the suggestion of indices \n> --------------------\n> postgres=# create table city_habitant (number_habitant text);\n> CREATE TABLE\n> postgres=# insert into city_habitant (number_habitant) select 'São \n> Paulo' from (select generate_series (1, 4000000)) a;\n> INSERT 0 4000000\n> postgres=# insert into city_habitant (number_habitant) select 'Rio de \n> Janeiro' from (select generate_series (1, 8000000)) a;\n> INSERT 0 8000000\n> postgres=# insert into city_habitant (number_habitant) select \n> 'Recife' from (select generate_series (1, 6000000)) a;\n> INSERT 0 6000000\n> postgres=# insert into city_habitant (number_habitant) select \n> 'Santos' from (select generate_series (1, 2000000)) a;\n> INSERT 0 2000000\n> postgres=# insert into city_habitant (number_habitant) select 'Chui' \n> from (select generate_series (1, 6)) a;\n> INSERT 0 6\n> postgres=# SELECT number_habitant, count(number_habitant) FROM \n> city_habitant GROUP BY number_habitant;\n> number_habitant | count\n> -------------------+------\n> Rio de Janeiro | 8000000\n> Recife | 6000000\n> Santos | 2000000\n> São Paulo | 4000000\n> Chui | 6\n> (5 rows)\n>\n> <https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail> \n> \tLivre de vírus. www.avast.com \n> <https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>. \n>\n>\n\n\n\n\n\n\n\n Hi,\n You should probably report your issue at https://github.com/dalibo/powa/issues\n KR\n\nLe 18/11/2017 à 02:52, Neto pr a\n écrit :\n\n\nDear all\n\n I have successfully installed POWA (http://dalibo.github.io/powa),\n including all required extensions, see the following Printscreen\n of its operation of end email.\n\n But when executing queries in psql- comand line, this queries\n are not monitored by powa. I have checked that only Postgresql\n internal catalog queries are shown. 
.\n I need the Optimize Query functionality and mainly the\n suggestion of indexes.\n But that does not work, by clicking on the optimize query\n option, returns zero suggestions.\n\n See below that I created a scenario, with a table with a large\n amount of data, to check if the tool would suggest some index,\n and when making a complex query, no index is suggested.\n\n Someone uses POWA, knows if they have to configure something so\n that the queries are monitored and show suggestions ??\n\n ---------------------- Printscreens of my environment partially\n working:--------------\n\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa1.jpeg\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa2.jpeg\nhttps://sites.google.com/site/eletrolareshop/repositorio/powa3.jpeg\n\n-------------------------------------------------------------------------------------------------------\n ------------------------ scenario to verify the suggestion of\n indices -------------------- \n postgres=# create table city_habitant (number_habitant text);\n CREATE TABLE\n postgres=# insert into city_habitant (number_habitant) select\n 'São Paulo' from (select generate_series (1, 4000000)) a;\n INSERT 0 4000000\n postgres=# insert into city_habitant (number_habitant) select\n 'Rio de Janeiro' from (select generate_series (1, 8000000)) a;\n INSERT 0 8000000\n postgres=# insert into city_habitant (number_habitant) select\n 'Recife' from (select generate_series (1, 6000000)) a;\n INSERT 0 6000000\n postgres=# insert into city_habitant (number_habitant) select\n 'Santos' from (select generate_series (1, 2000000)) a;\n INSERT 0 2000000\n postgres=# insert into city_habitant (number_habitant) select\n 'Chui' from (select generate_series (1, 6)) a;\n INSERT 0 6\n postgres=# SELECT number_habitant, count(number_habitant) FROM\n city_habitant GROUP BY number_habitant;\n number_habitant | count\n -------------------+------\n Rio de Janeiro | 8000000\n Recife | 6000000\n Santos | 2000000\n São Paulo | 4000000\n Chui | 6\n (5 rows)\n\n\n\n\n\n\nLivre\n de vírus. www.avast.com.",
"msg_date": "Tue, 21 Nov 2017 09:00:53 +0100",
"msg_from": "phb07 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] POWA doesn't show queries executed"
},
{
"msg_contents": "Hi,\n\npowa relies on extensions (pg_stat_statements, pg_qualstats) that needs \nto be installed in every database you want to monitor. Maybe you just \ninstalled them only into postgres database?!\n\nBest regards\nMarco\n\n\nAm 18.11.2017 um 02:52 schrieb Neto pr:\n> Dear all\n> \n> I have successfully installed POWA (http://dalibo.github.io/powa), \n> including all required extensions, see the following Printscreen of its \n> operation of end email.\n> \n> But when executing queries in psql- comand line, this queries are not \n> monitored by powa. I have checked that only Postgresql internal catalog \n> queries are shown. .\n> I need the Optimize Query functionality and mainly the suggestion of \n> indexes.\n> But that does not work, by clicking on the optimize query option, \n> returns zero suggestions.\n> \n> See below that I created a scenario, with a table with a large amount of \n> data, to check if the tool would suggest some index, and when making a \n> complex query, no index is suggested.\n> \n> Someone uses POWA, knows if they have to configure something so that the \n> queries are monitored and show suggestions ??\n> \n> ---------------------- Printscreens of my environment partially \n> working:--------------\n> \n> https://sites.google.com/site/eletrolareshop/repositorio/powa1.jpeg\n> https://sites.google.com/site/eletrolareshop/repositorio/powa2.jpeg\n> https://sites.google.com/site/eletrolareshop/repositorio/powa3.jpeg\n> \n> -------------------------------------------------------------------------------------------------------\n> ------------------------ scenario to verify the suggestion of indices \n> --------------------\n> postgres=# create table city_habitant (number_habitant text);\n> CREATE TABLE\n> postgres=# insert into city_habitant (number_habitant) select 'São \n> Paulo' from (select generate_series (1, 4000000)) a;\n> INSERT 0 4000000\n> postgres=# insert into city_habitant (number_habitant) select 'Rio de \n> Janeiro' from (select generate_series (1, 8000000)) a;\n> INSERT 0 8000000\n> postgres=# insert into city_habitant (number_habitant) select 'Recife' \n> from (select generate_series (1, 6000000)) a;\n> INSERT 0 6000000\n> postgres=# insert into city_habitant (number_habitant) select 'Santos' \n> from (select generate_series (1, 2000000)) a;\n> INSERT 0 2000000\n> postgres=# insert into city_habitant (number_habitant) select 'Chui' \n> from (select generate_series (1, 6)) a;\n> INSERT 0 6\n> postgres=# SELECT number_habitant, count(number_habitant) FROM \n> city_habitant GROUP BY number_habitant;\n> number_habitant | count\n> -------------------+------\n> Rio de Janeiro | 8000000\n> Recife | 6000000\n> Santos | 2000000\n> São Paulo | 4000000\n> Chui | 6\n> (5 rows)\n> \n> <https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail> \n> \tLivre de vírus. www.avast.com \n> <https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail>. \n> \n> \n> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>\n\n",
"msg_date": "Tue, 21 Nov 2017 16:29:22 +0100",
"msg_from": "Marco Nietz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] POWA doesn't show queries executed"
}
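A minimal sketch of the per-database setup Marco describes, assuming the stock POWA stack; the database name mydb is only a placeholder, and the exact shared_preload_libraries line depends on the POWA version installed:

    -- run once in EACH database you want POWA to monitor
    \c mydb
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    CREATE EXTENSION IF NOT EXISTS pg_qualstats;

    -- these modules typically also have to be preloaded in postgresql.conf
    -- (followed by a restart), along the lines of:
    --   shared_preload_libraries = 'powa, pg_stat_statements, pg_qualstats'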
] |
[
{
"msg_contents": "Hi,\nI'm trying to understand my wals behavior on my postgresql environment.\nMy wal settings are :\n\n wal_keep_segments = 200\n max_wal_size = 3GB\n min_wal_size = 80MB\n archive_command = 'cp %p /PostgreSQL-wal/9.6/pg_xlog/wal_archives/%f'\n archive_timeout = 10\n #checkpoint_flush_after = 256kB\n #checkpoint_completion_target = 0.5\n\nMy wals directory is /PostgreSQL-wal/9.6/pg_xlog/ and my archives directory\nis PostgreSQL-wal/9.6/pg_xlog/wal_archives.\nLast night my wals directory storage got full(archive directory also\nbecause they are on the same fs..)l\n\nI have right now 211 wals in my wal`s directory :\n\n ls -l /PostgreSQL-wal/9.6/pg_xlog/ | wc -l\n 212\n\nThe only thing that was running during the night are only selects from our\nmonitoring agent. I guess that wal were created because the archive_timeout\nwas very low and they were deleted because the wal_keep_segments was high.\n\nThis morning , I set the wal_keep_segments to 100 and I set the\narchive_timeout to 6 minutes. Now, after setting those settings and\nstarting the cluster wals switch is working fine and I didnt see that many\nwals were created. However, doesnt the old wals should be deleted\nautomaticly ? Can I delete archives safely ?\n\nThanks , Mariel.\n\nHi,I'm trying to understand my wals behavior on my postgresql environment.My wal settings are : wal_keep_segments = 200 max_wal_size = 3GB min_wal_size = 80MB archive_command = 'cp %p /PostgreSQL-wal/9.6/pg_xlog/wal_archives/%f' archive_timeout = 10 #checkpoint_flush_after = 256kB #checkpoint_completion_target = 0.5My wals directory is /PostgreSQL-wal/9.6/pg_xlog/ and my archives directory is PostgreSQL-wal/9.6/pg_xlog/wal_archives.Last night my wals directory storage got full(archive directory also because they are on the same fs..)lI have right now 211 wals in my wal`s directory : ls -l /PostgreSQL-wal/9.6/pg_xlog/ | wc -l 212The only thing that was running during the night are only selects from our monitoring agent. I guess that wal were created because the archive_timeout was very low and they were deleted because the wal_keep_segments was high. This morning , I set the wal_keep_segments to 100 and I set the archive_timeout to 6 minutes. Now, after setting those settings and starting the cluster wals switch is working fine and I didnt see that many wals were created. However, doesnt the old wals should be deleted automaticly ? Can I delete archives safely ?Thanks , Mariel.",
"msg_date": "Mon, 20 Nov 2017 11:02:40 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 9.6 wals management"
},
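To see how quickly WAL is being generated and whether archiving keeps up, a quick check like the following can help; the function names are the 9.6 ones mentioned in this thread:

    -- current write position and the segment file it falls in
    SELECT pg_current_xlog_location(),
           pg_xlogfile_name(pg_current_xlog_location());

    -- archiver statistics: segments archived so far, the last one, and failures
    SELECT archived_count, last_archived_wal, last_archived_time,
           failed_count, last_failed_wal
    FROM pg_stat_archiver;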
{
"msg_contents": "On Mon, Nov 20, 2017 at 6:02 PM, Mariel Cherkassky\n<[email protected]> wrote:\n> This morning , I set the wal_keep_segments to 100 and I set the\n> archive_timeout to 6 minutes. Now, after setting those settings and starting\n> the cluster wals switch is working fine and I didnt see that many wals were\n> However, doesnt the old wals should be deleted automaticly ? Can I\n> delete archives safely ?\n\nArchives are useful if they can be used with a base backup which would\nallow it to recover close to the point has created WAL activity, up to\nthe last finished segment to be precise. So if you have no base\nbackups or standbys (for example disconnected for a long) that would\nuse them, there is no point in keeping them. What defines the archive\nand base backup retention is your data retention policy. Do not touch\nthe files of pg_xlog though, those are managed by PostgreSQL itself.\nIt is also good practice to put the archives on a different partition,\nand to not have the archives in a sub-path of the main data folder as\nyou do as those would get included in all base backups taken.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Nov 2017 21:28:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.6 wals management"
},
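A hedged sketch of the layout Michael suggests: archive onto a separate mount point rather than under the data directory, and prune archives only once a newer base backup makes them unnecessary. The path /mnt/wal_archive and the segment name below are made-up examples.

    # postgresql.conf
    archive_mode = on
    archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

    # after taking a fresh base backup, older archives can be pruned, e.g.:
    #   pg_archivecleanup /mnt/wal_archive 000000010000000000000010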
{
"msg_contents": "Thank you for the clarification.\n\nבתאריך 20 בנוב׳ 2017 14:28, \"Michael Paquier\" <[email protected]>\nכתב:\n\n> On Mon, Nov 20, 2017 at 6:02 PM, Mariel Cherkassky\n> <[email protected]> wrote:\n> > This morning , I set the wal_keep_segments to 100 and I set the\n> > archive_timeout to 6 minutes. Now, after setting those settings and\n> starting\n> > the cluster wals switch is working fine and I didnt see that many wals\n> were\n> > However, doesnt the old wals should be deleted automaticly ? Can I\n> > delete archives safely ?\n>\n> Archives are useful if they can be used with a base backup which would\n> allow it to recover close to the point has created WAL activity, up to\n> the last finished segment to be precise. So if you have no base\n> backups or standbys (for example disconnected for a long) that would\n> use them, there is no point in keeping them. What defines the archive\n> and base backup retention is your data retention policy. Do not touch\n> the files of pg_xlog though, those are managed by PostgreSQL itself.\n> It is also good practice to put the archives on a different partition,\n> and to not have the archives in a sub-path of the main data folder as\n> you do as those would get included in all base backups taken.\n> --\n> Michael\n>\n\nThank you for the clarification. בתאריך 20 בנוב׳ 2017 14:28, \"Michael Paquier\" <[email protected]> כתב:On Mon, Nov 20, 2017 at 6:02 PM, Mariel Cherkassky\n<[email protected]> wrote:\n> This morning , I set the wal_keep_segments to 100 and I set the\n> archive_timeout to 6 minutes. Now, after setting those settings and starting\n> the cluster wals switch is working fine and I didnt see that many wals were\n> However, doesnt the old wals should be deleted automaticly ? Can I\n> delete archives safely ?\n\nArchives are useful if they can be used with a base backup which would\nallow it to recover close to the point has created WAL activity, up to\nthe last finished segment to be precise. So if you have no base\nbackups or standbys (for example disconnected for a long) that would\nuse them, there is no point in keeping them. What defines the archive\nand base backup retention is your data retention policy. Do not touch\nthe files of pg_xlog though, those are managed by PostgreSQL itself.\nIt is also good practice to put the archives on a different partition,\nand to not have the archives in a sub-path of the main data folder as\nyou do as those would get included in all base backups taken.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2017 14:56:27 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.6 wals management"
}
] |
[
{
"msg_contents": "Greetings,\n\nWe will be migrating these lists to pglister in the next few minutes.\n\nThis final email on the old list system is intended to let you know\nthat future emails will have different headers and you will need to\nadjust your filters.\n\nThe changes which we expect to be most significant to users can be found\non the wiki here: https://wiki.postgresql.org/wiki/PGLister_Announce\n\nOnce the migration of these lists is complete, an 'after' email will be\nsent out.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 20 Nov 2017 09:33:05 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migration to pglister - Before"
},
{
"msg_contents": "On 20/11/2017 16:33, Stephen Frost wrote:\n> Greetings,\n>\n> We will be migrating these lists to pglister in the next few minutes.\n>\n> This final email on the old list system is intended to let you know\n> that future emails will have different headers and you will need to\n> adjust your filters.\n>\n> The changes which we expect to be most significant to users can be found\n> on the wiki here: https://wiki.postgresql.org/wiki/PGLister_Announce\n>\n> Once the migration of these lists is complete, an 'after' email will be\n> sent out.\n\nHello,\n\ncongrats for the migration! One question, will the same upgrade happen to the pgfoundry lists? Some of them (pgbouncer) seem quite dead.\n\n>\n> Thanks!\n>\n> Stephen\n\n\n-- \nAchilleas Mantzios\nIT DEV Lead\nIT DEPT\nDynacom Tankers Mgmt\n\n\n",
"msg_date": "Mon, 20 Nov 2017 17:30:53 +0200",
"msg_from": "Achilleas Mantzios <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Migration to pglister - Before"
},
{
"msg_contents": "Greetings,\n\n* Achilleas Mantzios ([email protected]) wrote:\n> congrats for the migration! One question, will the same upgrade happen to the pgfoundry lists? Some of them (pgbouncer) seem quite dead.\n\nWe do not have any control nor access to the pgfoundry lists. My\ninclination would be to post your questions or comments regarding\npgbouncer to our -general list, it's certainly closely related\ntechnology to PostgreSQL and there's quite a few folks there who use it.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 20 Nov 2017 10:34:02 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Migration to pglister - Before"
}
] |
[
{
"msg_contents": "Greetings!\n\nThis list has now been migrated to new mailing list software known as\n'PGLister'. This migration will impact all users of this mailing list\nin one way or another.\n\nIf you would like to unsubscribe from this mailing list, please click on\n'Show Original' or 'Show Headers' in your client and find the\n'List-Unsubscribe' header which will include a link that you can click\non (or copy/paste into your browser) and use to unsubscribe yourself\nfrom this list.\n\nThe changes which we expect to be most significant to users can be found\non the wiki here: https://wiki.postgresql.org/wiki/PGLister_Announce the\ncurrent version of which is also included below.\n\nThank you!\n\nStephen",
"msg_date": "Mon, 20 Nov 2017 10:03:02 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migration to PGLister - After"
}
] |
[
{
"msg_contents": "Hello PGSQL experts,\n\nI've used your great database pretty heavily for the last 4 years, and during\nthat time it's helped me to solve an amazingly wide variety of data\nchallenges. Last week, I finally ran into something weird enough I couldn't\nfigure it out by myself. I'm using a self-compiled copy from latest 10.x\nstable branch, Ubuntu 16.04 LTS, inserts with psycopg2, queries (so far) with\npsql for testing, later JDBC (PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled\nby gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609, 64-bit).\n\nI have this great big table of strings, about 180 million rows. I want to be\nable to search this table for substring matches and overall string similarity\nagainst various inputs (an ideal use case for pg_trgm, from what I can see in\nthe docs and the research articles for such indexing).\n\nI need a unique b-tree index on the strings, to prevent duplicates in the\ninput in the beginning, and from adding new strings in the future, and the\n{gin,gist}_trgm_ops index to speed up the matching. I couldn't fully\nunderstand from the docs if my use case was a better fit for GIN, or for GIST.\nSome parts of the docs implied GIST would be faster, but only for less than\n100K entries, at which point GIN would be faster. I am hoping someone could\ncomment.\n\nHere is the table:\n\n Unlogged table \"public.huge_table\"\n Column | Type | Collation | Nullable | Default\n-------------+--------------------------+-----------+----------+-----------------------------------------------\n id | bigint | | not null | nextval('huge_table_id_seq'::regclass)\n inserted_ts | timestamp with time zone | | | transaction_timestamp()\n value | character varying | | |\nIndexes:\n \"huge_table_pkey\" PRIMARY KEY, btree (id)\n \"huge_table_value_idx\" UNIQUE, btree (value)\n \"huge_table_value_trgm\" gin (value gin_trgm_ops)\n\nI managed to load the table initially in about 9 hours, after doing some\noptimizations below based on various documentation (the server is 8-core Xeon\nE5504, 16 GB RAM, 4 Hitachi 1TB 7200 RPM in a RAID 5 via Linux MD):\n\n* compiled latest 10.x stable code branch from Git\n* unlogged table (risky but made a big difference)\n* shared_buffers 6 GB\n* work_mem 32 MB\n* maintenance_work_mem 512 MB\n* effective_cache_size 10 GB\n* synchronous_commit off\n* wal_buffers 16 MB\n* max_wal_size 4 GB\n* checkpoint_completion_target 0.9\n* auto_explain, and slow log for >= 1000 msecs (to debug this)\n\nI'm noticing that the performance of inserts starts slipping quite a bit, as\nthe data is growing. It starts out fast, <1 sec per batch of 5000, but\neventually slows to 5-10 sec. per batch, sometimes randomly more.\n\nIn this example, it was just starting to slow, taking 4 secs to insert 5000\nvalues:\n\n2017-11-18 08:10:21 UTC [29578-11250] arceo@osint LOG: duration: 4034.901 ms plan:\n Query Text: INSERT INTO huge_table (value) VALUES\n ('value1'),\n\t... 
4998 more values ...\n ('value5000')\n ON CONFLICT (value) DO NOTHING\n Insert on huge_table (cost=0.00..87.50 rows=5000 width=48)\n Conflict Resolution: NOTHING\n Conflict Arbiter Indexes: huge_table_value_idx\n -> Values Scan on \"*VALUES*\" (cost=0.00..87.50 rows=5000 width=48)\n\nWhen it's inserting, oddly enough, the postgres seems mostly CPU limited,\nwhere I would have expected more of an IO limit personally, and the memory\nisn't necessarily over-utilized either, so it makes me wonder if I missed some\nthings.\n\nKiB Mem : 16232816 total, 159196 free, 487392 used, 15586228 buff/cache\nKiB Swap: 93702144 total, 93382320 free, 319816 used. 8714944 avail Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n29578 postgres 20 0 6575672 6.149g 6.139g R 86.0 39.7 45:24.97 postgres\n\nAs for queries, doing a simple query like this one seems to require around 30\nseconds to a minute. My volume is not crazy high but I am hoping I could get\nthis down to less than 30 seconds, because other stuff above this code will\nstart to time out otherwise:\n\nosint=# explain analyze select * from huge_table where value ilike '%keyword%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on huge_table (cost=273.44..61690.09 rows=16702 width=33) (actual time=2897.847..58438.545 rows=16423 loops=1)\n Recheck Cond: ((value)::text ~~* '%keyword%'::text)\n Rows Removed by Index Recheck: 3\n Heap Blocks: exact=5954\n -> Bitmap Index Scan on huge_table_value_trgm (cost=0.00..269.26 rows=16702 width=0) (actual time=2888.846..2888.846 rows=16434 loops=1)\n Index Cond: ((value)::text ~~* '%keyword%'::text)\n Planning time: 0.252 ms\n Execution time: 58442.413 ms\n(8 rows)\n\nThanks for reading this and letting me know any recommendations.\n\nSincerely,\nMatthew Hall\n",
"msg_date": "Mon, 20 Nov 2017 14:54:01 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert and query performance on big string table with pg_trgm"
},
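For reference, a sketch of how the two indexes shown in the \d output above would be declared; building the trigram GIN index after the bulk load, rather than before it, usually makes the initial load much cheaper:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE UNIQUE INDEX huge_table_value_idx ON huge_table (value);
    CREATE INDEX huge_table_value_trgm ON huge_table USING gin (value gin_trgm_ops);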
{
"msg_contents": "On Mon, Nov 20, 2017 at 2:54 PM, Matthew Hall <[email protected]> wrote:\n\nWhile I have not done exhaustive testing, from the tests I have done I've\nnever found gist to be better than gin with trgm indexes.\n\n\n>\n> Here is the table:\n>\n> Unlogged table \"public.huge_table\"\n> Column | Type | Collation | Nullable |\n> Default\n> -------------+--------------------------+-----------+-------\n> ---+-----------------------------------------------\n> id | bigint | | not null |\n> nextval('huge_table_id_seq'::regclass)\n> inserted_ts | timestamp with time zone | | |\n> transaction_timestamp()\n> value | character varying | | |\n> Indexes:\n> \"huge_table_pkey\" PRIMARY KEY, btree (id)\n> \"huge_table_value_idx\" UNIQUE, btree (value)\n> \"huge_table_value_trgm\" gin (value gin_trgm_ops)\n>\n\nDo you really need the artificial primary key, when you already have\nanother column that would be used as the primary key? If you need to use\nthis it a foreign key in another type, then very well might. But\nmaintaining two unique indexes doesn't come free.\n\nAre all indexes present at the time you insert? It will probably be much\nfaster to insert without the gin index (at least) and build it after the\nload.\n\nWithout knowing this key fact, it is hard to interpret the rest of your\ndata.\n\n\n>\n> I managed to load the table initially in about 9 hours, after doing some\n> optimizations below based on various documentation (the server is 8-core\n> Xeon\n> E5504, 16 GB RAM, 4 Hitachi 1TB 7200 RPM in a RAID 5 via Linux MD):\n> ...\n\n\n\n>\n>\n* maintenance_work_mem 512 MB\n>\n\nBuilding a gin index in bulk could benefit from more memory here.\n\n* synchronous_commit off\n>\n\nIf you already are using unlogged tables, this might not be so helpful, but\ndoes increase the risk of the rest of your system.\n\n\n\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 29578 postgres 20 0 6575672 6.149g 6.139g R 86.0 39.7 45:24.97\n> postgres\n>\n\nYou should expand the command line (by hitting 'c', at least in my version\nof top) so we can see which postgres process this is.\n\n\n>\n> As for queries, doing a simple query like this one seems to require around\n> 30\n> seconds to a minute. My volume is not crazy high but I am hoping I could\n> get\n> this down to less than 30 seconds, because other stuff above this code will\n> start to time out otherwise:\n>\n> osint=# explain analyze select * from huge_table where value ilike\n> '%keyword%';\n>\n\nexplain (analyze, buffers), please. And hopefully with track_io_timing=on.\n\nIf you repeat the same query, is it then faster, or is it still slow?\n\nCheers,\n\nJeff\n\nOn Mon, Nov 20, 2017 at 2:54 PM, Matthew Hall <[email protected]> wrote:While I have not done exhaustive testing, from the tests I have done I've never found gist to be better than gin with trgm indexes. \nHere is the table:\n\n Unlogged table \"public.huge_table\"\n Column | Type | Collation | Nullable | Default\n-------------+--------------------------+-----------+----------+-----------------------------------------------\n id | bigint | | not null | nextval('huge_table_id_seq'::regclass)\n inserted_ts | timestamp with time zone | | | transaction_timestamp()\n value | character varying | | |\nIndexes:\n \"huge_table_pkey\" PRIMARY KEY, btree (id)\n \"huge_table_value_idx\" UNIQUE, btree (value)\n \"huge_table_value_trgm\" gin (value gin_trgm_ops)Do you really need the artificial primary key, when you already have another column that would be used as the primary key? 
If you need to use this it a foreign key in another type, then very well might. But maintaining two unique indexes doesn't come free.Are all indexes present at the time you insert? It will probably be much faster to insert without the gin index (at least) and build it after the load.Without knowing this key fact, it is hard to interpret the rest of your data. \n\nI managed to load the table initially in about 9 hours, after doing some\noptimizations below based on various documentation (the server is 8-core Xeon\nE5504, 16 GB RAM, 4 Hitachi 1TB 7200 RPM in a RAID 5 via Linux MD): ... \n* maintenance_work_mem 512 MBBuilding a gin index in bulk could benefit from more memory here. * synchronous_commit offIf you already are using unlogged tables, this might not be so helpful, but does increase the risk of the rest of your system. \n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n29578 postgres 20 0 6575672 6.149g 6.139g R 86.0 39.7 45:24.97 postgresYou should expand the command line (by hitting 'c', at least in my version of top) so we can see which postgres process this is. \n\nAs for queries, doing a simple query like this one seems to require around 30\nseconds to a minute. My volume is not crazy high but I am hoping I could get\nthis down to less than 30 seconds, because other stuff above this code will\nstart to time out otherwise:\n\nosint=# explain analyze select * from huge_table where value ilike '%keyword%';explain (analyze, buffers), please. And hopefully with track_io_timing=on.If you repeat the same query, is it then faster, or is it still slow?Cheers,Jeff",
"msg_date": "Mon, 20 Nov 2017 17:42:50 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
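A sketch of the two things suggested here: timing-instrumented EXPLAIN output, and loading with the GIN index dropped and rebuilt afterwards. The keyword is a placeholder.

    -- better query diagnostics
    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM huge_table WHERE value ILIKE '%keyword%';

    -- cheaper bulk load: drop the trigram index, load, then rebuild it
    DROP INDEX IF EXISTS huge_table_value_trgm;
    -- ... bulk INSERT / COPY here ...
    CREATE INDEX huge_table_value_trgm ON huge_table USING gin (value gin_trgm_ops);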
{
"msg_contents": "Hi Jeff,\n\nThanks so much for writing. You've got some great points.\n\n> On Nov 20, 2017, at 5:42 PM, Jeff Janes <[email protected]> wrote:\n> While I have not done exhaustive testing, from the tests I have done I've never found gist to be better than gin with trgm indexes.\n\nThanks, this helps considerably, as the documentation was kind of confusing and I didn't want to get it wrong if I could avoid it.\n\n> Do you really need the artificial primary key, when you already have another column that would be used as the primary key? If you need to use this it a foreign key in another type, then very well might. But maintaining two unique indexes doesn't come free.\n\nOK, fair enough, I'll test with it removed and see what happens.\n\n> Are all indexes present at the time you insert? It will probably be much faster to insert without the gin index (at least) and build it after the load.\n\nThere is some flexibility on the initial load, but the updates in the future will require the de-duplication capability. I'm willing to accept that might be somewhat slower on the load process, to get the accurate updates, provided we could try meeting the read-side goal I wrote about, or at least figure out why it's impossible, so I can understand what I need to fix to make it possible.\n\n> Without knowing this key fact, it is hard to interpret the rest of your data.\n\nI'm assuming you're referring to the part about the need for the primary key, and the indexes during loading? I did try to describe that in the earlier mail, but obviously I'm new at writing these, so sorry if I didn't make it more clear. I can get rid of the bigserial PK and the indexes could be made separately, but I would need a way to de-duplicate on future reloading... that's why I had the ON CONFLICT DO NOTHING expression on the INSERT. So we'd still want to learn why the INSERT is slow to fix up the update processes that would happen in the future.\n\n> * maintenance_work_mem 512 MB\n> \n> Building a gin index in bulk could benefit from more memory here. \n\nFixed it; I will re-test w/ 1 GB. Have you got any recommended values so I don't screw it up?\n\n> * synchronous_commit off\n> \n> If you already are using unlogged tables, this might not be so helpful, but does increase the risk of the rest of your system.\n\nFixed it; the unlogged mode change came later than this did.\n\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 29578 postgres 20 0 6575672 6.149g 6.139g R 86.0 39.7 45:24.97 postgres\n> \n> You should expand the command line (by hitting 'c', at least in my version of top) so we can see which postgres process this is.\n\nGood point, I'll write back once I retry w/ your other advice.\n\n> explain (analyze, buffers), please. And hopefully with track_io_timing=on.\n\ntrack_io_timing was missing because sadly I had only found it in one document at the very end of the investigation, after doing the big job which generated all of the material posted. 
It's there now, so here is some better output on the query:\n\nexplain (analyze, buffers) select * from huge_table where value ilike '%canada%';\n\n Bitmap Heap Scan on huge_table (cost=273.44..61690.09 rows=16702 width=33) (actual time=5701.511..76469.688 rows=110166 loops=1)\n Recheck Cond: ((value)::text ~~* '%canada%'::text)\n Rows Removed by Index Recheck: 198\n Heap Blocks: exact=66657\n Buffers: shared hit=12372 read=56201 dirtied=36906\n I/O Timings: read=74195.734\n -> Bitmap Index Scan on huge_table_value_trgm (cost=0.00..269.26 rows=16702 width=0) (actual time=5683.032..5683.032 rows=110468 loops=1)\n Index Cond: ((value)::text ~~* '%canada%'::text)\n Buffers: shared hit=888 read=1028\n I/O Timings: read=5470.839\n Planning time: 0.271 ms\n Execution time: 76506.949 ms\n\nI will work some more on the insert piece.\n\n> If you repeat the same query, is it then faster, or is it still slow?\n\nIf you keep the expression exactly the same, it still takes a few seconds as could be expected for such a torture test query, but it's still WAY faster than the first such query. If you change it out to a different expression, it's longer again of course. There does seem to be a low-to-medium correlation between the number of rows found and the query completion time.\n\n> Cheers,\n> Jeff\n\nThanks,\nMatthew.\n",
"msg_date": "Tue, 21 Nov 2017 00:05:09 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
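One way to give the index build more memory without raising the setting globally is to bump maintenance_work_mem only in the session that runs CREATE INDEX; the 1GB value simply mirrors the figure discussed above, not a recommendation:

    SET maintenance_work_mem = '1GB';
    CREATE INDEX huge_table_value_trgm ON huge_table USING gin (value gin_trgm_ops);
    RESET maintenance_work_mem;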
{
"msg_contents": "On Nov 21, 2017 00:05, \"Matthew Hall\" <[email protected]> wrote:\n\n\n> Are all indexes present at the time you insert? It will probably be much\nfaster to insert without the gin index (at least) and build it after the\nload.\n\nThere is some flexibility on the initial load, but the updates in the\nfuture will require the de-duplication capability. I'm willing to accept\nthat might be somewhat slower on the load process, to get the accurate\nupdates, provided we could try meeting the read-side goal I wrote about, or\nat least figure out why it's impossible, so I can understand what I need to\nfix to make it possible.\n\n\nAs long as you don't let anyone use the table between the initial load and\nwhen the index build finishes, you don't have to compromise on\ncorrectness. But yeah, makes sense to worry about query speed first.\n\n\n\n\n\n\n> If you repeat the same query, is it then faster, or is it still slow?\n\nIf you keep the expression exactly the same, it still takes a few seconds\nas could be expected for such a torture test query, but it's still WAY\nfaster than the first such query. If you change it out to a different\nexpression, it's longer again of course. There does seem to be a\nlow-to-medium correlation between the number of rows found and the query\ncompletion time.\n\n\nTo make this quick, you will need to get most of the table and most of the\nindex cached into RAM. A good way to do that is with pg_prewarm. Of\ncourse that only works if you have enough RAM in the first place.\n\nWhat is the size of the table and the gin index?\n\n\nCheers,\n\nJeff\n\nOn Nov 21, 2017 00:05, \"Matthew Hall\" <[email protected]> wrote:\n> Are all indexes present at the time you insert? It will probably be much faster to insert without the gin index (at least) and build it after the load.\n\nThere is some flexibility on the initial load, but the updates in the future will require the de-duplication capability. I'm willing to accept that might be somewhat slower on the load process, to get the accurate updates, provided we could try meeting the read-side goal I wrote about, or at least figure out why it's impossible, so I can understand what I need to fix to make it possible.As long as you don't let anyone use the table between the initial load and when the index build finishes, you don't have to compromise on correctness. But yeah, makes sense to worry about query speed first.\n\n\n> If you repeat the same query, is it then faster, or is it still slow?\n\nIf you keep the expression exactly the same, it still takes a few seconds as could be expected for such a torture test query, but it's still WAY faster than the first such query. If you change it out to a different expression, it's longer again of course. There does seem to be a low-to-medium correlation between the number of rows found and the query completion time.To make this quick, you will need to get most of the table and most of the index cached into RAM. A good way to do that is with pg_prewarm. Of course that only works if you have enough RAM in the first place.What is the size of the table and the gin index?Cheers,Jeff",
"msg_date": "Fri, 24 Nov 2017 22:35:22 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
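A minimal sketch of the pg_prewarm idea, assuming the contrib module is available on the server; each call returns the number of blocks it read into the buffer cache:

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('huge_table');             -- heap
    SELECT pg_prewarm('huge_table_value_trgm');  -- trigram GIN index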
{
"msg_contents": "Don't know if it would make PostgreSQL happier but how about adding a hash\nvalue column and creating the unique index on that one? May block some\nfalse duplicates but the unique index would be way smaller, speeding up\ninserts.\n\n2017. nov. 25. 7:35 ezt írta (\"Jeff Janes\" <[email protected]>):\n\n>\n>\n> On Nov 21, 2017 00:05, \"Matthew Hall\" <[email protected]> wrote:\n>\n>\n> > Are all indexes present at the time you insert? It will probably be\n> much faster to insert without the gin index (at least) and build it after\n> the load.\n>\n> There is some flexibility on the initial load, but the updates in the\n> future will require the de-duplication capability. I'm willing to accept\n> that might be somewhat slower on the load process, to get the accurate\n> updates, provided we could try meeting the read-side goal I wrote about, or\n> at least figure out why it's impossible, so I can understand what I need to\n> fix to make it possible.\n>\n>\n> As long as you don't let anyone use the table between the initial load and\n> when the index build finishes, you don't have to compromise on\n> correctness. But yeah, makes sense to worry about query speed first.\n>\n>\n>\n>\n>\n>\n> > If you repeat the same query, is it then faster, or is it still slow?\n>\n> If you keep the expression exactly the same, it still takes a few seconds\n> as could be expected for such a torture test query, but it's still WAY\n> faster than the first such query. If you change it out to a different\n> expression, it's longer again of course. There does seem to be a\n> low-to-medium correlation between the number of rows found and the query\n> completion time.\n>\n>\n> To make this quick, you will need to get most of the table and most of the\n> index cached into RAM. A good way to do that is with pg_prewarm. Of\n> course that only works if you have enough RAM in the first place.\n>\n> What is the size of the table and the gin index?\n>\n>\n> Cheers,\n>\n> Jeff\n>\n>\n\nDon't know if it would make PostgreSQL happier but how about adding a hash value column and creating the unique index on that one? May block some false duplicates but the unique index would be way smaller, speeding up inserts.2017. nov. 25. 7:35 ezt írta (\"Jeff Janes\" <[email protected]>):On Nov 21, 2017 00:05, \"Matthew Hall\" <[email protected]> wrote:\n> Are all indexes present at the time you insert? It will probably be much faster to insert without the gin index (at least) and build it after the load.\n\nThere is some flexibility on the initial load, but the updates in the future will require the de-duplication capability. I'm willing to accept that might be somewhat slower on the load process, to get the accurate updates, provided we could try meeting the read-side goal I wrote about, or at least figure out why it's impossible, so I can understand what I need to fix to make it possible.As long as you don't let anyone use the table between the initial load and when the index build finishes, you don't have to compromise on correctness. But yeah, makes sense to worry about query speed first.\n\n\n> If you repeat the same query, is it then faster, or is it still slow?\n\nIf you keep the expression exactly the same, it still takes a few seconds as could be expected for such a torture test query, but it's still WAY faster than the first such query. If you change it out to a different expression, it's longer again of course. 
There does seem to be a low-to-medium correlation between the number of rows found and the query completion time.To make this quick, you will need to get most of the table and most of the index cached into RAM. A good way to do that is with pg_prewarm. Of course that only works if you have enough RAM in the first place.What is the size of the table and the gin index?Cheers,Jeff",
"msg_date": "Sat, 25 Nov 2017 10:19:59 +0100",
"msg_from": "=?UTF-8?B?R8OhYm9yIFNaxbBDUw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
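One way the hash idea could look, sketched here with an md5() expression index rather than a separate column; it keeps the unique index small at the cost of treating an (extremely unlikely) md5 collision as a duplicate, and the loader's conflict target has to match the indexed expression:

    CREATE UNIQUE INDEX huge_table_value_md5_idx ON huge_table (md5(value));
    -- if this replaces huge_table_value_idx, that larger index can then be dropped

    INSERT INTO huge_table (value) VALUES ('example string')
    ON CONFLICT (md5(value)) DO NOTHING;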
{
"msg_contents": "On Nov 21, 2017, at 12:05 AM, Matthew Hall <[email protected]> wrote:\n>> Do you really need the artificial primary key, when you already have another column that would be used as the primary key? If you need to use this it a foreign key in another type, then very well might. But maintaining two unique indexes doesn't come free.\n> \n> OK, fair enough, I'll test with it removed and see what happens.\n\nWith the integer primary key removed, it still takes ~9 hours to load the table, so it didn't seem to make a big difference.\n\n> Fixed it; I will re-test w/ 1 GB. Have you got any recommended values so I don't screw it up?\n\nI also took this step for maintenance_work_mem.\n\nQueries on the table still take a long time with the PK removed:\n\n# explain (analyze, buffers) select * from huge_table where value ilike '%yahoo%';\n\n Bitmap Heap Scan on huge_table (cost=593.72..68828.97 rows=18803 width=25) (actual time=3224.100..70059.839 rows=20909 loops=1)\n Recheck Cond: ((value)::text ~~* '%yahoo%'::text)\n Rows Removed by Index Recheck: 17\n Heap Blocks: exact=6682\n Buffers: shared hit=544 read=6760 dirtied=4034\n I/O Timings: read=69709.611\n -> Bitmap Index Scan on huge_table_value_trgm_idx (cost=0.00..589.02 rows=18803 width=0) (actual time=3216.545..3216.545 rows=20926 loops=1)\n Index Cond: ((value)::text ~~* '%yahoo%'::text)\n Buffers: shared hit=352 read=270\n I/O Timings: read=3171.872\n Planning time: 0.283 ms\n Execution time: 70065.157 ms\n(12 rows)\n\nThe slow process during inserts is:\n\npostgres: username dbname [local] INSERT\n\nThe slow statement example is:\n\n2017-12-06 04:27:11 UTC [16085-10378] username@dbname LOG: duration: 5028.190 ms plan:\n Query Text: INSERT INTO huge_table (value) VALUES\n .... 5000 values at once ...\n ON CONFLICT (value) DO NOTHING\n Insert on huge_table (cost=0.00..75.00 rows=5000 width=40)\n Conflict Resolution: NOTHING\n Conflict Arbiter Indexes: huge_table_value_idx\n -> Values Scan on \"*VALUES*\" (cost=0.00..75.00 rows=5000 width=40)\n\n> What is the size of the table and the gin index?\n\nThe table is 10 GB. The gin index is 5.8 GB.\n\n> [From Gabor Szucs] [H]ow about adding a hash value column and creating the unique index on that one? May block some false duplicates but the unique index would be way smaller, speeding up inserts.\n\nThe mean length of the input items is about 18 bytes. The max length of the input items is about 67 bytes. The size of the md5 would of course be 16 bytes. I'm testing it now, and I'll write another update.\n\nMatthew.\n",
"msg_date": "Tue, 5 Dec 2017 22:15:13 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
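For completeness, a sketch of how the sizes quoted above can be pulled with the standard size functions (index name as shown in the plan above):

    SELECT pg_size_pretty(pg_table_size('huge_table'))                   AS heap,
           pg_size_pretty(pg_relation_size('huge_table_value_trgm_idx')) AS trgm_index,
           pg_size_pretty(pg_total_relation_size('huge_table'))          AS heap_plus_indexes;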
{
"msg_contents": "> Buffers: shared hit=544 read=6760 dirtied=4034\n> I/O Timings: read=69709.611\nYou has very slow (or busy) disks, not postgresql issue. Reading 6760 * 8KB in 70 seconds is very bad result.\n\nFor better performance you need better disks, at least raid10 (not raid5). Much more memory in shared_buffers can help with read performance and so reduce disk utilization, but write operations still will be slow.\n\nSergei\n\n",
"msg_date": "Wed, 06 Dec 2017 10:23:10 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
},
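To quantify how much of the read traffic actually misses the cache (the part more shared_buffers would help with), a rough check against the statistics views, as a sketch:

    SELECT sum(heap_blks_hit)  AS heap_hit,
           sum(heap_blks_read) AS heap_read,
           sum(idx_blks_hit)   AS idx_hit,
           sum(idx_blks_read)  AS idx_read
    FROM pg_statio_user_tables;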
{
"msg_contents": "\n> On Dec 5, 2017, at 11:23 PM, Sergei Kornilov <[email protected]> wrote:\n> You has very slow (or busy) disks, not postgresql issue. Reading 6760 * 8KB in 70 seconds is very bad result.\n> \n> For better performance you need better disks, at least raid10 (not raid5). Much more memory in shared_buffers can help with read performance and so reduce disk utilization, but write operations still will be slow.\n> \n> Sergei\n\nSergei,\n\nThanks so much for confirming, this really helps a lot to know what to do. I thought the disk could be some of my issue, but I wanted to make sure I did all of the obvious tuning first. I have learned some very valuable things which I'll be able to use on future challenges like this which I didn't learn previously.\n\nBased on this advice from everyone, I'm setting up a box with more RAM, lots of SSDs, and RAID 10. I'll write back in a few more days after I've completed it.\n\nI can also confirm that the previous advice about using a hash / digest based unique index seemed to make the loading process slower for me, not faster, which is an interesting result to consider for future users following this thread (if any). I don't yet have specific data how much slower, because it's actually still going!\n\nSincerely,\nMatthew.\n",
"msg_date": "Wed, 6 Dec 2017 18:04:54 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert and query performance on big string table with pg_trgm"
}
] |
[
{
"msg_contents": "Hi!\nFirst of all, thanks for the great work! PostgreSQL is amazing, and\ncommunity is super helpful.\n\nI found an unexpected behaviour in PostgreSQL, and was advised to post\nit to the performance mailing list on IRC. \n\nUsing GROUPING SETS with more than one set disables predicate pushdown?\n\nVersion:\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nSeems like when GROUPING SETS with at least two sets are used in the\nsubquery, planner\ncan not push WHERE clauses inside.\n\nHere are two queries that (I think) are equivalent, but produce very\ndifferent execution\nplans leading to bad performance on real data - and in effect,\nmaking it impossible to abstract away non-trivial grouping logic into a\nview.\n\nIt might as well be that queries are not really equivalent, but I don't\nsee how.\n\nSame problem happens even if grouping sets are the same - like `GROUPING\nSETS ((), ())`.\n\nCREATE TEMPORARY TABLE test_gs (\n x INT,\n y INT,\n z INT,\n PRIMARY KEY (x, y, z)\n);\n\nEXPLAIN\nSELECT\n x,\n y,\n avg(z) AS mean\nFROM test_gs\nWHERE x = 1\nGROUP BY x, GROUPING SETS ((y), ());\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n GroupAggregate (cost=0.15..8.65 rows=20 width=40)\n Group Key: x, y\n Group Key: x\n -> Index Only Scan using test_gs_pkey on test_gs (cost=0.15..8.33\n rows=10 width=12)\n Index Cond: (x = 1)\n(5 rows)\n\n\n\nEXPLAIN\nSELECT x, y, mean\nFROM (\n SELECT\n x,\n y,\n avg(z) AS mean\n FROM test_gs\n GROUP BY x, GROUPING SETS ((y), ())\n ) AS g\nWHERE x = 1;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.15..62.10 rows=404 width=40)\n Group Key: test_gs.x, test_gs.y\n Group Key: test_gs.x\n Filter: (test_gs.x = 1)\n -> Index Only Scan using test_gs_pkey on test_gs (cost=0.15..41.75\n rows=2040 width=12)\n(5 rows)\n\n\nThe issue here is that the second query is not using index to filter on\nx = 1 , instead it reads all the tuples from an index and applies the\nfilter.\n\nHere is also a description in gist:\nhttps://gist.github.com/zeveshe/cf92c9d2a6b14518af3180113e767ae7\n\nThanks a lot!\n\n-- \n Zakhar Shapurau\n [email protected]\n+47 407 54 397\n\n",
"msg_date": "Tue, 21 Nov 2017 12:04:17 +0100",
"msg_from": "Zakhar Shapurau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using GROUPING SETS with more than one set disables predicate\n pushdown?"
},
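Until the planner handles this, one workaround is to apply the filter below the grouping step, inside a subquery, so the index condition on x is still usable; for this particular query it is equivalent because x appears in every grouping set:

    EXPLAIN
    SELECT x, y, avg(z) AS mean
    FROM (SELECT * FROM test_gs WHERE x = 1) AS t
    GROUP BY x, GROUPING SETS ((y), ());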
{
"msg_contents": "Zakhar Shapurau <[email protected]> writes:\n> Using GROUPING SETS with more than one set disables predicate pushdown?\n\nIt looks like this is a case that no one's gotten round to yet.\nThe comment in the relevant code is\n\n * In some cases we may want to transfer a HAVING clause into WHERE. We\n * cannot do so if the HAVING clause contains aggregates (obviously) or\n * volatile functions (since a HAVING clause is supposed to be executed\n * only once per group). We also can't do this if there are any nonempty\n * grouping sets; moving such a clause into WHERE would potentially change\n * the results, if any referenced column isn't present in all the grouping\n * sets. (If there are only empty grouping sets, then the HAVING clause\n * must be degenerate as discussed below.)\n\nPresumably, we could examine the grouping sets to identify column(s)\npresent in all sets, and then allow the optimization for clauses that\nreference only such columns. Or maybe I'm misreading the comment\n(but then it needs clarification).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 21 Nov 2017 09:49:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using GROUPING SETS with more than one set disables predicate\n pushdown?"
},
{
"msg_contents": "\n\nOn November 21, 2017 6:49:26 AM PST, Tom Lane <[email protected]> wrote:\n>Zakhar Shapurau <[email protected]> writes:\n>\n>Presumably, we could examine the grouping sets to identify column(s)\n>present in all sets, and then allow the optimization for clauses that\n>reference only such columns. Or maybe I'm misreading the comment\n>(but then it needs clarification).\n\nBy memory that sounds about right. IIRC we'd some slightly more elaborate logic when GS were introduced, but had to take it out as buggy, and it was too late in the development cycle to come up with something better.\n\nAndres\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Tue, 21 Nov 2017 10:16:56 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using GROUPING SETS with more than one set disables predicate\n pushdown?"
}
] |
[
{
"msg_contents": "Hello\n\nWe use a system in filmproduction called DaVinci Resolve. It uses a pgsql database when you work in a collaborative workflow and multiple people share projects. Previously it was using pgsql 8.4 but for a new major upgrade they recommend an upgrade to 9.5. Probably also to some macOS limitation/support and that 9.x is required for macOS >10.11.\n\nThey (BlackMagic Design) provide three tools for the migration.\n1. For for dumping everything form the old 8.4 database\n2. One for upgrading from 8.4 to 9.5\n3. One for restoring the backup in step 1 in 9.5\n\nAll that went smoothly and working in the systems also works smoothly and as good as previously, maybe even a bit better/faster.\n\nWhat's not working smoothly is my daily pg_dump's though. I don't have a reference to what's a big and what's a small database since I'm no db-guy and don't really maintain nor work with it on a daily basis. Pretty much only this system we use that has a db system like this. Below is a list of what we dump.\n\n930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n10G Nov 18 23:34 filmserver03_2017-11-18_132043_resolve_filmserver02.backup\n516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n\n\nThe last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change.\n\nI dump the db's with a custom script and this is the line I use to get the DB's:\nDATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align --tuples-only --command=\"SELECT datname from pg_database WHERE NOT datistemplate\")\n\nAfter that I iterate over them with a for loop and dump with:\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} | tee -a ${log_pg_dump}_${database}.log\n\nWhen observing the system during the dump it LOOKS like it did in 8.4. pg_dump is using 100% of one core and from what I can see it does this through out the operation. But it's still sooooo much slower. I read about the parallell option in pg_dump for 9.5 but sadly I cannot dump like that because the application in question can (probably) not import that format on it's own and I would have to use pgrestore or something. Which in theory is fine but sometimes one of the artists have to import the db backup. So need to keep it simple.\n\nThe system is:\nMacPro 5,1\n2x2.66 GHz Quad Core Xeon\n64 GB RAM\nmacOS 10.11.6\nPostgreSQL 9.5.4\nDB on a 6 disk SSD RAID\n\n\nI hope I got all the info needed. Really hope someone with more expertise and skills than me can point me in the right direction.\n\nCheers and thanks\n\n\n--\nHenrik Cednert\ncto | compositor\n\n\n\n\n\n\n\n\n\n\nHello\n\n\nWe use a system in filmproduction called DaVinci Resolve. It uses a pgsql database when you work in a collaborative workflow and multiple people share projects. Previously it was using pgsql 8.4 but for a new major upgrade they recommend an upgrade\n to 9.5. 
Probably also to some macOS limitation/support and that 9.x is required for macOS >10.11.\n\n\nThey (BlackMagic Design) provide three tools for the migration. \n1. For for dumping everything form the old 8.4 database\n2. One for upgrading from 8.4 to 9.5\n3. One for restoring the backup in step 1 in 9.5\n\n\nAll that went smoothly and working in the systems also works smoothly and as good as previously, maybe even a bit better/faster. \n\n\nWhat's not working smoothly is my daily pg_dump's though. I don't have a reference to what's a big and what's a small database since I'm no db-guy and don't really maintain nor work with it on a daily basis. Pretty much only this system we use\n that has a db system like this. Below is a list of what we dump.\n\n\n930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n10G Nov 18 23:34 filmserver03_2017-11-18_132043_resolve_filmserver02.backup\n516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n\n\nThe last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644\n minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change. \n\n\nI dump the db's with a custom script and this is the line I use to get the DB's:\n\n\nDATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align --tuples-only --command=\"SELECT datname from pg_database WHERE NOT datistemplate\")\n\n\n\nAfter that I iterate over them with a for loop and dump with:\n\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} | tee -a ${log_pg_dump}_${database}.log \n\n\n\nWhen observing the system during the dump it LOOKS like it did in 8.4. pg_dump is using 100% of one core and from what I can see it does this through out the operation. But it's still sooooo much slower. I read about the parallell option in pg_dump\n for 9.5 but sadly I cannot dump like that because the application in question can (probably) not import that format on it's own and I would have to use pgrestore or something. Which in theory is fine but sometimes one of the artists have to import the db backup.\n So need to keep it simple.\n\n\nThe system is:\nMacPro 5,1\n2x2.66 GHz Quad Core Xeon\n64 GB RAM\nmacOS 10.11.6\nPostgreSQL 9.5.4\nDB on a 6 disk SSD RAID\n\n\n\n\nI hope I got all the info needed. Really hope someone with more expertise and skills than me can point me in the right direction.\n\n\nCheers and thanks\n\n\n--\nHenrik\n Cednert\ncto\n | compositor",
"msg_date": "Tue, 21 Nov 2017 14:28:43 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
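Since pg_dump's custom format compresses with zlib by default (when built with zlib support) and the dump is pegging a single CPU core, one cheap experiment is to re-run the same dump with compression disabled purely as a timing comparison; the time prefix and the _nocomp suffix are just for the test, and the resulting file will be much larger:

    time ${BINARY_PATH}/pg_dump --host=localhost --username=postgres --no-password \
        --blobs --format=custom --compress=0 \
        --file=${pg_dump_filename}_${database}_nocomp.backup ${database}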
{
"msg_contents": "From: Henrik Cednert (Filmlance) [mailto:[email protected]]\nSent: Tuesday, November 21, 2017 9:29 AM\nTo: [email protected]\nSubject: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\nHello\n\nWe use a system in filmproduction called DaVinci Resolve. It uses a pgsql database when you work in a collaborative workflow and multiple people share projects. Previously it was using pgsql 8.4 but for a new major upgrade they recommend an upgrade to 9.5. Probably also to some macOS limitation/support and that 9.x is required for macOS >10.11.\n\nThey (BlackMagic Design) provide three tools for the migration.\n1. For for dumping everything form the old 8.4 database\n2. One for upgrading from 8.4 to 9.5\n3. One for restoring the backup in step 1 in 9.5\n\nAll that went smoothly and working in the systems also works smoothly and as good as previously, maybe even a bit better/faster.\n\nWhat's not working smoothly is my daily pg_dump's though. I don't have a reference to what's a big and what's a small database since I'm no db-guy and don't really maintain nor work with it on a daily basis. Pretty much only this system we use that has a db system like this. Below is a list of what we dump.\n\n930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n10G Nov 18 23:34 filmserver03_2017-11-18_132043_resolve_filmserver02.backup\n516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n\n\nThe last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change.\n\nI dump the db's with a custom script and this is the line I use to get the DB's:\nDATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align --tuples-only --command=\"SELECT datname from pg_database WHERE NOT datistemplate\")\n\nAfter that I iterate over them with a for loop and dump with:\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} | tee -a ${log_pg_dump}_${database}.log\n\nWhen observing the system during the dump it LOOKS like it did in 8.4. pg_dump is using 100% of one core and from what I can see it does this through out the operation. But it's still sooooo much slower. I read about the parallell option in pg_dump for 9.5 but sadly I cannot dump like that because the application in question can (probably) not import that format on it's own and I would have to use pgrestore or something. Which in theory is fine but sometimes one of the artists have to import the db backup. So need to keep it simple.\n\nThe system is:\nMacPro 5,1\n2x2.66 GHz Quad Core Xeon\n64 GB RAM\nmacOS 10.11.6\nPostgreSQL 9.5.4\nDB on a 6 disk SSD RAID\n\n\nI hope I got all the info needed. 
Really hope someone with more expertise and skills than me can point me in the right direction.\n\nCheers and thanks\n\n\n--\nHenrik Cednert\ncto | compositor\nAccording to pg_dump command in your script you are dumping your databases in custom format:\n\n--format=custom\n\nThese backups could only be restored using pg_restore (or something that wraps pg_restore).\nSo, you can safely add parallel option. It should not affect your restore procedure.\n\nRegards,\nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:25:17 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
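A note on the parallel option mentioned above: pg_dump's --jobs mode only works with the directory output format, not with --format=custom, so a parallel dump/restore pair would look roughly like the sketch below. This is a minimal sketch that reuses the placeholder variables from the poster's own script (${BINARY_PATH}, ${database}, ${pg_dump_filename}); the job count of 4 is an arbitrary assumption, not a value from the thread.

    # Parallel dump: requires --format=directory (available since pg_dump 9.3)
    ${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password \
        --format=directory --jobs=4 --blobs --verbose \
        --file=${pg_dump_filename}_${database}.dir ${database}

    # Parallel restore of the same directory-format archive
    ${BINARY_PATH}/pg_restore --host=localhost --user=postgres --no-password \
        --jobs=4 --dbname=${database} ${pg_dump_filename}_${database}.dir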
{
"msg_contents": "Ahh! Nice catch Igor. Thanks. =)\n\nWill try and see if resolve can read that back in.\n\nStill very curious about the 3x slowdown in 9.5 pg_dump though.\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nOn 21 Nov 2017, at 17:25, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\nSent: Tuesday, November 21, 2017 9:29 AM\nTo: [email protected]<mailto:[email protected]>\nSubject: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\nHello\n\nWe use a system in filmproduction called DaVinci Resolve. It uses a pgsql database when you work in a collaborative workflow and multiple people share projects. Previously it was using pgsql 8.4 but for a new major upgrade they recommend an upgrade to 9.5. Probably also to some macOS limitation/support and that 9.x is required for macOS >10.11.\n\nThey (BlackMagic Design) provide three tools for the migration.\n1. For for dumping everything form the old 8.4 database\n2. One for upgrading from 8.4 to 9.5\n3. One for restoring the backup in step 1 in 9.5\n\nAll that went smoothly and working in the systems also works smoothly and as good as previously, maybe even a bit better/faster.\n\nWhat's not working smoothly is my daily pg_dump's though. I don't have a reference to what's a big and what's a small database since I'm no db-guy and don't really maintain nor work with it on a daily basis. Pretty much only this system we use that has a db system like this. Below is a list of what we dump.\n\n930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n10G Nov 18 23:34 filmserver03_2017-11-18_132043_resolve_filmserver02.backup\n516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n\n\nThe last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change.\n\nI dump the db's with a custom script and this is the line I use to get the DB's:\nDATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align --tuples-only --command=\"SELECT datname from pg_database WHERE NOT datistemplate\")\n\nAfter that I iterate over them with a for loop and dump with:\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} | tee -a ${log_pg_dump}_${database}.log\n\nWhen observing the system during the dump it LOOKS like it did in 8.4. pg_dump is using 100% of one core and from what I can see it does this through out the operation. But it's still sooooo much slower. I read about the parallell option in pg_dump for 9.5 but sadly I cannot dump like that because the application in question can (probably) not import that format on it's own and I would have to use pgrestore or something. Which in theory is fine but sometimes one of the artists have to import the db backup. 
So need to keep it simple.\n\nThe system is:\nMacPro 5,1\n2x2.66 GHz Quad Core Xeon\n64 GB RAM\nmacOS 10.11.6\nPostgreSQL 9.5.4\nDB on a 6 disk SSD RAID\n\n\nI hope I got all the info needed. Really hope someone with more expertise and skills than me can point me in the right direction.\n\nCheers and thanks\n\n\n--\nHenrik Cednert\ncto | compositor\nAccording to pg_dump command in your script you are dumping your databases in custom format:\n\n--format=custom\n\nThese backups could only be restored using pg_restore (or something that wraps pg_restore).\nSo, you can safely add parallel option. It should not affect your restore procedure.\n\nRegards,\nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:27:12 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "From: Henrik Cednert (Filmlance) [mailto:[email protected]]\nSent: Tuesday, November 21, 2017 11:27 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\nAhh! Nice catch Igor. Thanks. =)\n\nWill try and see if resolve can read that back in.\n\nStill very curious about the 3x slowdown in 9.5 pg_dump though.\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\n\nBasically, you are dumping 40GB of data.\nI'd say even 212 minutes under 8.4 version was too slow.\nWhat kind of RAID is it? RAID1/RAID10/RAID5?\n\nRegards,\nIgor Neyman\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\n\nSent: Tuesday, November 21, 2017 11:27 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\n\n \n\nAhh! Nice catch Igor. Thanks. =) \n\n \n\n\nWill try and see if resolve can read that back in.\n\n\n \n\n\nStill very curious about the 3x slowdown in 9.5 pg_dump though.\n\n\n \n\n\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\n\n \nBasically, you are dumping 40GB of data.\nI’d say even 212 minutes under 8.4 version was too slow.\nWhat kind of RAID is it? RAID1/RAID10/RAID5?\n \nRegards,\nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:34:03 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "RAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read.\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\r\n\r\nOn 21 Nov 2017, at 17:34, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\nSent: Tuesday, November 21, 2017 11:27 AM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\r\n\r\nAhh! Nice catch Igor. Thanks. =)\r\n\r\nWill try and see if resolve can read that back in.\r\n\r\nStill very curious about the 3x slowdown in 9.5 pg_dump though.\r\n\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\n\r\nBasically, you are dumping 40GB of data.\r\nI’d say even 212 minutes under 8.4 version was too slow.\r\nWhat kind of RAID is it? RAID1/RAID10/RAID5?\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n\n\n\n\n\r\nRAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read. \n\n--\nHenrik\r\n Cednert\ncto\r\n | compositor\n\nFilmlance\r\n International\nmobile\r\n [ +\r\n 46 (0)704 71 89 54 ]\nskype\r\n [ cednert ] \n\n\nOn 21 Nov 2017, at 17:34, Igor Neyman <[email protected]> wrote:\n\n\n\n\n \n\n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]] \nSent: Tuesday, November 21, 2017 11:27 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\n\n\n \n\n\r\nAhh! Nice catch Igor. Thanks. =) \n\n\n \n\n\n\r\nWill try and see if resolve can read that back in.\n\n\n\n \n\n\n\r\nStill very curious about the 3x slowdown in 9.5 pg_dump though.\n\n\n\n \n\n\n\n\n\n--\r\nHenrik Cednert\r\ncto | compositor\n\nFilmlance International\n\n\n \n\nBasically, you are dumping 40GB of data.\n\nI’d say even 212 minutes under 8.4 version was too slow.\n\nWhat kind of RAID is it? RAID1/RAID10/RAID5?\n\n \n\nRegards,\n\nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:37:13 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Guys,\n\nSorry to bother you but can anyone help me unsubscribe from this list?\nI followed the instructions in the original email and got an error\nmessage...\nThanks,\n\n-- Shaul\n\nOn Tue, Nov 21, 2017 at 6:25 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> *From:* Henrik Cednert (Filmlance) [mailto:[email protected]]\n> *Sent:* Tuesday, November 21, 2017 9:29 AM\n> *To:* [email protected]\n> *Subject:* pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n>\n>\n>\n> Hello\n>\n>\n>\n> We use a system in filmproduction called DaVinci Resolve. It uses a pgsql\n> database when you work in a collaborative workflow and multiple people\n> share projects. Previously it was using pgsql 8.4 but for a new major\n> upgrade they recommend an upgrade to 9.5. Probably also to some macOS\n> limitation/support and that 9.x is required for macOS >10.11.\n>\n>\n>\n> They (BlackMagic Design) provide three tools for the migration.\n>\n> 1. For for dumping everything form the old 8.4 database\n>\n> 2. One for upgrading from 8.4 to 9.5\n>\n> 3. One for restoring the backup in step 1 in 9.5\n>\n>\n>\n> All that went smoothly and working in the systems also works smoothly and\n> as good as previously, maybe even a bit better/faster.\n>\n>\n>\n> What's not working smoothly is my daily pg_dump's though. I don't have a\n> reference to what's a big and what's a small database since I'm no db-guy\n> and don't really maintain nor work with it on a daily basis. Pretty much\n> only this system we use that has a db system like this. Below is a list of\n> what we dump.\n>\n>\n>\n> 930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n> 2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n> 522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n> 23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n> 5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n> 10G Nov 18 23:34 filmserver03_2017-11-18_132043_resolve_filmserver02.\n> backup\n> 516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n> 1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n>\n>\n> The last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with\n> 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes\n> about three times as long now and I have no idea to why. Nothing in the\n> system or hardware other than the pgsql upgrade have change.\n>\n>\n>\n> I dump the db's with a custom script and this is the line I use to get the\n> DB's:\n>\n> DATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align\n> --tuples-only --command=\"SELECT datname from pg_database WHERE NOT\n> datistemplate\")\n>\n>\n>\n> After that I iterate over them with a for loop and dump with:\n>\n> ${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password\n> --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup\n> ${database} | tee -a ${log_pg_dump}_${database}.log\n>\n>\n>\n> When observing the system during the dump it LOOKS like it did in 8.4.\n> pg_dump is using 100% of one core and from what I can see it does this\n> through out the operation. But it's still sooooo much slower. I read about\n> the parallell option in pg_dump for 9.5 but sadly I cannot dump like that\n> because the application in question can (probably) not import that format\n> on it's own and I would have to use pgrestore or something. 
Which in theory\n> is fine but sometimes one of the artists have to import the db backup. So\n> need to keep it simple.\n>\n>\n>\n> The system is:\n>\n> MacPro 5,1\n>\n> 2x2.66 GHz Quad Core Xeon\n>\n> 64 GB RAM\n>\n> macOS 10.11.6\n>\n> PostgreSQL 9.5.4\n>\n> DB on a 6 disk SSD RAID\n>\n>\n>\n>\n>\n> I hope I got all the info needed. Really hope someone with more expertise\n> and skills than me can point me in the right direction.\n>\n>\n>\n> Cheers and thanks\n>\n>\n>\n>\n> --\n> Henrik Cednert\n> cto | compositor\n>\n> According to pg_dump command in your script you are dumping your databases\n> in custom format:\n>\n>\n>\n> --format=custom\n>\n>\n>\n> These backups could only be restored using pg_restore (or something that\n> wraps pg_restore).\n>\n> So, you can safely add parallel option. It should not affect your restore\n> procedure.\n>\n>\n>\n> Regards,\n>\n> Igor Neyman\n>\n>\n>",
"msg_date": "Tue, 21 Nov 2017 18:44:23 +0200",
"msg_from": "Shaul Dar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "From: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\nSent: Tuesday, November 21, 2017 11:37 AM\r\nTo: [email protected]\r\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\r\n\r\n\r\nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\r\n\r\nRAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read.\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\r\n\r\n_________________________________________________________________________________________________\r\n\r\nOkay, I was kind of wrong about 40GB. That’s the size of your compressed backup files, not the size of your databases.\r\nMay be your dbs are “bloated”?\r\nYou could try VACUUM FULL on your databases, when there is no other activity.\r\n\r\nIgor Neyman\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\n\nSent: Tuesday, November 21, 2017 11:37 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\n\n \nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\n \n\nRAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read. \n\n\n--\r\nHenrik Cednert\r\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ] \n\n\n\n_________________________________________________________________________________________________\n \nOkay, I was kind of wrong about 40GB. That’s the size of your compressed backup files, not the size of your databases.\nMay be your dbs are “bloated”?\nYou could try VACUUM FULL on your databases, when there is no other activity.\n \nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:44:38 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "I VACUUM every sunday so that is done already. =/\r\n\r\nNot sure I have the proper params though since I'm not used to db's but have followed other's \"how to's\", but these are the lines in my script for that;\r\n\r\n${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\r\n${BINARY_PATH}/reindexdb --host=localhost --username=postgres --no-password --echo ${database} | tee -a ${log_pg_optimize}_${database}.log\r\n\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\r\n\r\nOn 21 Nov 2017, at 17:44, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\nSent: Tuesday, November 21, 2017 11:37 AM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\r\n\r\n\r\nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\r\n\r\n\r\nRAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read.\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\r\n\r\n_________________________________________________________________________________________________\r\n\r\nOkay, I was kind of wrong about 40GB. That’s the size of your compressed backup files, not the size of your databases.\r\nMay be your dbs are “bloated”?\r\nYou could try VACUUM FULL on your databases, when there is no other activity.\r\n\r\nIgor Neyman\r\n\r\n\n\n\n\n\n\r\nI VACUUM every sunday so that is done already. =/ \r\n\n\nNot sure I have the proper params though since I'm not used to db's but have followed other's \"how to's\", but these are the lines in my script for that;\n\n\n\n${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\n\n${BINARY_PATH}/reindexdb --host=localhost --username=postgres --no-password --echo ${database} | tee -a ${log_pg_optimize}_${database}.log \n\n\n\n\n--\nHenrik\r\n Cednert\ncto\r\n | compositor\n\nFilmlance\r\n International\nmobile\r\n [ +\r\n 46 (0)704 71 89 54 ]\nskype\r\n [ cednert ] \n\n\nOn 21 Nov 2017, at 17:44, Igor Neyman <[email protected]> wrote:\n\n\n\n\n \n\n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]] \nSent: Tuesday, November 21, 2017 11:37 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\n\n\n \n\nAttention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\n\n \n\n\r\nRAID6. Doing disk test I have 1000MB/sec write and 1200MB/sec read. \n\n\n\n--\r\nHenrik Cednert\r\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\n\n\n\n\n_________________________________________________________________________________________________\n\n \n\nOkay, I was kind of wrong about 40GB. That’s the size of your compressed backup files, not the size of your databases.\n\nMay be your dbs are “bloated”?\n\nYou could try VACUUM FULL on your databases, when there is no other activity.\n\n \n\nIgor Neyman",
"msg_date": "Tue, 21 Nov 2017 16:48:07 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "From: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\nSent: Tuesday, November 21, 2017 11:48 AM\r\nTo: [email protected]\r\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\r\n\r\nI VACUUM every sunday so that is done already. =/\r\n\r\nNot sure I have the proper params though since I'm not used to db's but have followed other's \"how to's\", but these are the lines in my script for that;\r\n\r\n${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\r\n${BINARY_PATH}/reindexdb --host=localhost --username=postgres --no-password --echo ${database} | tee -a ${log_pg_optimize}_${database}.log\r\n\r\n\r\n--\r\nHenrik Cednert\r\ncto | compositor\r\n\r\nFilmlance International\r\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ]\r\n\r\n_______________________________________________________________________________________________\r\n\r\nTo do vacuum full you need to add –full option to your vacuumdb command:\r\n\r\n${BINARY_PATH}/vacuumdb --full --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\r\n\r\nJust be aware that “vacuum full” locks tables unlike just analyze”. So, like I said, no other acivity during this process.\r\n\r\nRegards,\r\nIgor\r\n\r\n\n\n\n\n\n\n\n\n\n \n\n\nFrom: Henrik Cednert (Filmlance) [mailto:[email protected]]\r\n\nSent: Tuesday, November 21, 2017 11:48 AM\nTo: [email protected]\nSubject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade\n\n\n \n\nI VACUUM every sunday so that is done already. =/ \n\n \n\n\nNot sure I have the proper params though since I'm not used to db's but have followed other's \"how to's\", but these are the lines in my script for that;\n\n\n \n\n\n\n${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\n\n\n\n${BINARY_PATH}/reindexdb --host=localhost --username=postgres --no-password --echo ${database} | tee -a ${log_pg_optimize}_${database}.log \n\n\n\n \n\n\n\n--\r\nHenrik Cednert\r\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\r\nskype [ cednert ] \n\n\n\n_______________________________________________________________________________________________\n \nTo do vacuum full you need to add –full option to your vacuumdb command:\n \n\n${BINARY_PATH}/vacuumdb --full --analyze --host=localhost --username=postgres --echo --verbose --no-password ${database} | tee -a ${log_pg_optimize}_${database}.log\n \nJust be aware that “vacuum full” locks tables unlike just analyze”. So, like I said, no other acivity during this process.\n \nRegards,\nIgor",
"msg_date": "Tue, 21 Nov 2017 16:57:30 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "\"Henrik Cednert (Filmlance)\" <[email protected]> writes:\n> The last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change.\n\nCan you get a profile of where the machine is spending its time during the\ndump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\nYou could use Activity Monitor, but as far as I can see that just captures\nshort-duration snapshots, which might not be representative of a 10-hour\nrun. XCode's Instruments feature would probably be better about giving\na full picture, but it has a steep learning curve.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 21 Nov 2017 12:01:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "On Tue, Nov 21, 2017 at 12:01 PM, Tom Lane <[email protected]> wrote:\n> \"Henrik Cednert (Filmlance)\" <[email protected]> writes:\n>> The last pg_dump with 8.4 took 212 minutes and 49 seconds.And now with 9.5 the very same pg_dump takes 644 minutes and 40 seconds. To it takes about three times as long now and I have no idea to why. Nothing in the system or hardware other than the pgsql upgrade have change.\n>\n> Can you get a profile of where the machine is spending its time during the\n> dump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\n> You could use Activity Monitor, but as far as I can see that just captures\n> short-duration snapshots, which might not be representative of a 10-hour\n> run. XCode's Instruments feature would probably be better about giving\n> a full picture, but it has a steep learning curve.\n\nmacOS's \"sample\" is pretty easy to use and produces text format output\nthat is easy to email.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 21 Nov 2017 13:39:26 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Nov 21, 2017 at 12:01 PM, Tom Lane <[email protected]> wrote:\n>> Can you get a profile of where the machine is spending its time during the\n>> dump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\n>> You could use Activity Monitor, but as far as I can see that just captures\n>> short-duration snapshots, which might not be representative of a 10-hour\n>> run. XCode's Instruments feature would probably be better about giving\n>> a full picture, but it has a steep learning curve.\n\n> macOS's \"sample\" is pretty easy to use and produces text format output\n> that is easy to email.\n\nAh, good idea. But note that only traces one process, so you'd need to\nfirst determine whether it's pg_dump or the backend that's eating most\nof the CPU. Or sample both of them.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 21 Nov 2017 13:46:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
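A rough illustration of the sampling suggestion above (the 60-second duration and the placeholder PIDs are assumptions, not values from the thread): while the dump is running, find both the pg_dump process and the postgres backend serving it, then sample each one; the output is plain text that can be pasted or attached.

    # Identify the two PIDs while the dump is running
    ps aux | grep -E '[p]g_dump|[p]ostgres'

    # Sample each process for 60 seconds and write the call stacks to text files
    sample <pg_dump_pid> 60 -file /tmp/pg_dump.sample.txt
    sample <backend_pid> 60 -file /tmp/backend.sample.txt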
{
"msg_contents": "Hello\n\nRunning it with format \"directory\" produced something I cannot import form the host application. So I aborted that.\n\nRunning it now and recording with Instruments. Guess I'll have to leave it cooking for the full procedure but I've added an initial one to pastebin.\nhttps://pastebin.com/QHRYUQhb\n\nI'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\n\n[cid:096ABFB7-4DE0-4C82-BA26-7F14A846AEEA]\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 21 Nov 2017, at 19:46, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\n\nRobert Haas <[email protected]<mailto:[email protected]>> writes:\nOn Tue, Nov 21, 2017 at 12:01 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\nCan you get a profile of where the machine is spending its time during the\ndump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\nYou could use Activity Monitor, but as far as I can see that just captures\nshort-duration snapshots, which might not be representative of a 10-hour\nrun. XCode's Instruments feature would probably be better about giving\na full picture, but it has a steep learning curve.\n\nmacOS's \"sample\" is pretty easy to use and produces text format output\nthat is easy to email.\n\nAh, good idea. But note that only traces one process, so you'd need to\nfirst determine whether it's pg_dump or the backend that's eating most\nof the CPU. Or sample both of them.\n\nregards, tom lane",
"msg_date": "Tue, 21 Nov 2017 18:53:29 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Hello\n\nRunning it with format \"directory\" produced something I cannot import form the host application. So I aborted that.\n\nRunning it now and recording with Instruments. Guess I'll have to leave it cooking for the full procedure but I've added an initial one to pastebin.\nhttps://pastebin.com/QHRYUQhb\n\nSent this with screenshot attached first but don't think the list supports that... So here's a screenshot from instruments after running for a few mins.\nhttps://www.dropbox.com/s/3vr5yzt4zs5svck/pg_dump_profile.png?dl=0\n\nCheers\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\n\n\nOn 21 Nov 2017, at 19:46, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\n\nRobert Haas <[email protected]<mailto:[email protected]>> writes:\nOn Tue, Nov 21, 2017 at 12:01 PM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\nCan you get a profile of where the machine is spending its time during the\ndump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\nYou could use Activity Monitor, but as far as I can see that just captures\nshort-duration snapshots, which might not be representative of a 10-hour\nrun. XCode's Instruments feature would probably be better about giving\na full picture, but it has a steep learning curve.\n\nmacOS's \"sample\" is pretty easy to use and produces text format output\nthat is easy to email.\n\nAh, good idea. But note that only traces one process, so you'd need to\nfirst determine whether it's pg_dump or the backend that's eating most\nof the CPU. Or sample both of them.\n\nregards, tom lane\n\n\n\n\n\n\n\n\nHello\n\n\nRunning it with format \"directory\" produced something I cannot import form the host application. So I aborted that.\n\n\nRunning it now and recording with Instruments. Guess I'll have to leave it cooking for the full procedure but I've added an initial one to pastebin.\nhttps://pastebin.com/QHRYUQhb\n\n\nSent this with screenshot attached first but don't think the list supports that... So here's a screenshot from instruments after running for a few mins. \nhttps://www.dropbox.com/s/3vr5yzt4zs5svck/pg_dump_profile.png?dl=0\n\n\nCheers\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\n\n\n\n\nOn 21 Nov 2017, at 19:46, Tom Lane <[email protected]> wrote:\n\n\nRobert Haas <[email protected]> writes:\nOn Tue, Nov 21, 2017 at 12:01 PM, Tom Lane <[email protected]> wrote:\nCan you get a profile of where the machine is spending its time during the\ndump run? On Linux I'd recommend \"perf\", but on macOS, hmm ...\nYou could use Activity Monitor, but as far as I can see that just captures\nshort-duration snapshots, which might not be representative of a 10-hour\nrun. XCode's Instruments feature would probably be better about giving\na full picture, but it has a steep learning curve.\n\n\n\nmacOS's \"sample\" is pretty easy to use and produces text format output\nthat is easy to email.\n\n\nAh, good idea. But note that only traces one process, so you'd need to\nfirst determine whether it's pg_dump or the backend that's eating most\nof the CPU. Or sample both of them.\n\nregards, tom lane",
"msg_date": "Tue, 21 Nov 2017 18:59:08 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "\"Henrik Cednert (Filmlance)\" <[email protected]> writes:\n> I'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\nIt looks like practically all of pg_dump's time is going into deflate(),\nie zlib. I don't find that terribly surprising in itself, but it offers\nno explanation for why you'd see a slowdown --- zlib isn't even our\ncode, nor has it been under active development for a long time, so\npresumably 8.4 and 9.5 would have used the same version. Perhaps you\nwere doing the 8.4 dump without compression enabled?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 21 Nov 2017 16:01:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
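Since the profile points at zlib, one way to see which zlib each installation's pg_dump is actually linked against on macOS is otool. This is a hedged suggestion only; the 8.4 path below is a guess based on the /Library/PostgreSQL layout mentioned elsewhere in the thread.

    # List the shared libraries each pg_dump binary is linked against
    otool -L /Library/PostgreSQL/9.5/bin/pg_dump | grep -iE 'libz|zlib'
    otool -L /Library/PostgreSQL/8.4/bin/pg_dump | grep -iE 'libz|zlib'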
{
"msg_contents": "Hi Tom\n\nI'm honestly not sure about anything. =) I use the exact same flags as with 8.4 for the dump:\n\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database}\n\nSo unless the default behaviour have changed in 9.x I'd say I don't use compression. I will try to force it to no compression and see if it's different.\n\nSadly the instruments session stopped recording when I logged out of the system yesterday. Doh. =/\n\nCheers\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 21 Nov 2017, at 22:01, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\n\n\"Henrik Cednert (Filmlance)\" <[email protected]<mailto:[email protected]>> writes:\nI'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\nIt looks like practically all of pg_dump's time is going into deflate(),\nie zlib. I don't find that terribly surprising in itself, but it offers\nno explanation for why you'd see a slowdown --- zlib isn't even our\ncode, nor has it been under active development for a long time, so\npresumably 8.4 and 9.5 would have used the same version. Perhaps you\nwere doing the 8.4 dump without compression enabled?\n\nregards, tom lane\n\n\n\n\n\n\n\nHi Tom\n\n\nI'm honestly not sure about anything. =) I use the exact same flags as with 8.4 for the dump:\n\n\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} \n\n\nSo unless the default behaviour have changed in 9.x I'd say I don't use compression. I will try to force it to no compression and see if it's different.\n\n\nSadly the instruments session stopped recording when I logged out of the system yesterday. Doh. =/\n\n\nCheers\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\nmobile\n [ +\n 46 (0)704 71 89 54 ]\nskype\n [ cednert ] \n\n\nOn 21 Nov 2017, at 22:01, Tom Lane <[email protected]> wrote:\n\n\n\"Henrik Cednert (Filmlance)\" <[email protected]> writes:\nI'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\n\nIt looks like practically all of pg_dump's time is going into deflate(),\nie zlib. I don't find that terribly surprising in itself, but it offers\nno explanation for why you'd see a slowdown --- zlib isn't even our\ncode, nor has it been under active development for a long time, so\npresumably 8.4 and 9.5 would have used the same version. Perhaps you\nwere doing the 8.4 dump without compression enabled?\n\nregards, tom lane",
"msg_date": "Wed, 22 Nov 2017 03:48:09 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Ha! So forcing compression to 0 i went from 644 minutes to 87 minutes. And this time I backed it to a afp share and from the looks of it I hit the roof on that eth interface. Size of backup went from 50GB to 260 GB though, hehehe.\n\nSo something seems to have changed regarding default compression level between 8.x and 9.6 when doing a custom format dump. I will time all the different levels and see if I can find out more.\n\nWHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\nCheers\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 22 Nov 2017, at 04:48, Henrik Cednert (Filmlance) <[email protected]<mailto:[email protected]>> wrote:\n\n\nThis sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing<http://aka.ms/LearnAboutSpoofing>\n Feedback<http://aka.ms/SafetyTipsFeedback>\nHi Tom\n\nI'm honestly not sure about anything. =) I use the exact same flags as with 8.4 for the dump:\n\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database}\n\nSo unless the default behaviour have changed in 9.x I'd say I don't use compression. I will try to force it to no compression and see if it's different.\n\nSadly the instruments session stopped recording when I logged out of the system yesterday. Doh. =/\n\nCheers\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 21 Nov 2017, at 22:01, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\n\n\"Henrik Cednert (Filmlance)\" <[email protected]<mailto:[email protected]>> writes:\nI'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\nIt looks like practically all of pg_dump's time is going into deflate(),\nie zlib. I don't find that terribly surprising in itself, but it offers\nno explanation for why you'd see a slowdown --- zlib isn't even our\ncode, nor has it been under active development for a long time, so\npresumably 8.4 and 9.5 would have used the same version. Perhaps you\nwere doing the 8.4 dump without compression enabled?\n\nregards, tom lane\n\n\n\n\n\n\n\n\nHa! So forcing compression to 0 i went from 644 minutes to 87 minutes. And this time I backed it to a afp share and from the looks of it I hit the roof on that eth interface. Size of backup went from 50GB to 260 GB though, hehehe. \n\n\nSo something seems to have changed regarding default compression level between 8.x and 9.6 when doing a custom format dump. I will time all the different levels and see if I can find out more. \n\n\nWHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\n\nCheers\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\nmobile\n [ +\n 46 (0)704 71 89 54 ]\nskype\n [ cednert ] \n\n\nOn 22 Nov 2017, at 04:48, Henrik Cednert (Filmlance) <[email protected]> wrote:\n\n\n\n\n\n\n\n\n\nThis sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing\n\n\nFeedback\n\n\n\nHi Tom\n\n\nI'm honestly not sure about anything. 
=) I use the exact same flags as with 8.4 for the dump:\n\n\n${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs --format=custom --verbose --file=${pg_dump_filename}_${database}.backup ${database} \n\n\nSo unless the default behaviour have changed in 9.x I'd say I don't use compression. I will try to force it to no compression and see if it's different.\n\n\nSadly the instruments session stopped recording when I logged out of the system yesterday. Doh. =/\n\n\nCheers\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\nmobile\n [ +\n 46 (0)704 71 89 54 ]\nskype\n [ cednert ] \n\n\nOn 21 Nov 2017, at 22:01, Tom Lane <[email protected]> wrote:\n\n\n\"Henrik Cednert (Filmlance)\" <[email protected]> writes:\nI'm not sure if I can attach screenshots here. Trying, screenshot from instruments after running for a few mins.\n\n\nIt looks like practically all of pg_dump's time is going into deflate(),\nie zlib. I don't find that terribly surprising in itself, but it offers\nno explanation for why you'd see a slowdown --- zlib isn't even our\ncode, nor has it been under active development for a long time, so\npresumably 8.4 and 9.5 would have used the same version. Perhaps you\nwere doing the 8.4 dump without compression enabled?\n\nregards, tom lane",
"msg_date": "Wed, 22 Nov 2017 06:18:08 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
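For reference, forcing the compression level explicitly is just a matter of adding --compress (or -Z) to the existing dump line; the sketch below mirrors the script line quoted in the thread with only that flag added, and level 0 is what produced the 87-minute run reported above.

    ${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password \
        --blobs --format=custom --compress=0 --verbose \
        --file=${pg_dump_filename}_${database}.backup ${database} | tee -a ${log_pg_dump}_${database}.log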
{
"msg_contents": "\n> On Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) <[email protected]> wrote:\n> \n> WHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\nI would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized right by the C compiler or something else sucks w/ the compression settings. The CPU should easily blast away at that faster than disks can read.\n\nI did do some studies of this previously some years ago, and I found gzip -6 offered the best ratio between size reduction and CPU time out of a very wide range of formats, but at the time xz was also not yet available.\n\nIf I were you I would first pipe the uncompressed output through a separate compression command, then you can experiment with the flags and threads, and you already get another separate process for the kernel to put on other CPUs as an automatic bonus for multi-core with minimal work.\n\nAfter that, xz is GNU standard now and has xz -T for cranking up some threads, with little extra effort for the user. But it can be kind of slow so probably need to lower the compression level somewhat depending a bit on some time testing. I would try on some medium sized DB table, like a bit over the size of system RAM, instead of dumping this great big DB, in order to benchmark a couple times until it looks happy.\n\nMatthew\n",
"msg_date": "Wed, 22 Nov 2017 02:32:45 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
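A minimal sketch of the approach Matthew describes: dump with pg_dump's internal compression turned off and compress on other cores with xz. The thread count and compression level are arbitrary assumptions, it presumes an xz build new enough to support -T, and the restore side would need the archive decompressed again before handing it to pg_restore or the application.

    # Dump uncompressed to stdout and let xz use all cores (-T0) at a light level (-3)
    ${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password \
        --blobs --format=custom --compress=0 ${database} \
      | xz -T0 -3 > ${pg_dump_filename}_${database}.backup.xz

    # Decompress (keeping the .xz) to get back a plain custom-format archive
    xz -dk ${pg_dump_filename}_${database}.backup.xz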
{
"msg_contents": "Hello\n\nI've ran it with all the different compression levels on one of the smaller db's now. And not sending any flags to it see is, as I've seen hinted on some page on internet, same as level 6.\n\nI do, somewhat, share the opinion that something is up with zlib. But at the same time I haven't touch it since the 8.4 installation so it's a mystery how it could've failed on its own. The only thing performed was an upgrade from 8.4 to 9.5. But yes, I can not really say exactly what that upgrade touched and what it didn't touch. Will investigate further.\n\n\nCOMPRESSION LEVEL: 0\nFILE SIZE: 6205982696\nreal 0m38.218s\nuser 0m3.558s\nsys 0m17.309s\n\n\nCOMPRESSION LEVEL: 1\nFILE SIZE: 1391475419\nreal 4m3.725s\nuser 3m54.132s\nsys 0m5.565s\n\n\nCOMPRESSION LEVEL: 2\nFILE SIZE: 1344563403\nreal 4m18.574s\nuser 4m9.466s\nsys 0m5.417s\n\n\nCOMPRESSION LEVEL: 3\nFILE SIZE: 1267601394\nreal 5m23.373s\nuser 5m14.339s\nsys 0m5.462s\n\n\nCOMPRESSION LEVEL: 4\nFILE SIZE: 1241632684\nreal 6m19.501s\nuser 6m10.148s\nsys 0m5.655s\n\n\nCOMPRESSION LEVEL: 5\nFILE SIZE: 1178377949\nreal 9m18.449s\nuser 9m9.733s\nsys 0m5.169s\n\n\nCOMPRESSION LEVEL: 6\nFILE SIZE: 1137727582\nreal 13m28.424s\nuser 13m19.842s\nsys 0m5.036s\n\n\nCOMPRESSION LEVEL: 7\nFILE SIZE: 1126257786\nreal 16m39.392s\nuser 16m30.094s\nsys 0m5.724s\n\n\nCOMPRESSION LEVEL: 8\nFILE SIZE: 1111804793\nreal 30m37.135s\nuser 30m26.785s\nsys 0m6.660s\n\n\nCOMPRESSION LEVEL: 9\nFILE SIZE: 1112194596\nreal 33m40.325s\nuser 33m27.122s\nsys 0m6.498s\n\n\nCOMPRESSION LEVEL AT DEFAULT NO FLAG PASSED TO 'pg_dump'\nFILE SIZE: 1140261276\nreal 13m18.178s\nuser 13m9.417s\nsys 0m5.242s\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 22 Nov 2017, at 11:32, Matthew Hall <[email protected]<mailto:[email protected]>> wrote:\n\n\nOn Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) <[email protected]<mailto:[email protected]>> wrote:\n\nWHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\nI would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized right by the C compiler or something else sucks w/ the compression settings. The CPU should easily blast away at that faster than disks can read.\n\nI did do some studies of this previously some years ago, and I found gzip -6 offered the best ratio between size reduction and CPU time out of a very wide range of formats, but at the time xz was also not yet available.\n\nIf I were you I would first pipe the uncompressed output through a separate compression command, then you can experiment with the flags and threads, and you already get another separate process for the kernel to put on other CPUs as an automatic bonus for multi-core with minimal work.\n\nAfter that, xz is GNU standard now and has xz -T for cranking up some threads, with little extra effort for the user. But it can be kind of slow so probably need to lower the compression level somewhat depending a bit on some time testing. I would try on some medium sized DB table, like a bit over the size of system RAM, instead of dumping this great big DB, in order to benchmark a couple times until it looks happy.\n\nMatthew\n\n\n\n\n\n\n\nHello\n\n\nI've ran it with all the different compression levels on one of the smaller db's now. And not sending any flags to it see is, as I've seen hinted on some page on internet, same as level 6. 
\n\n\nI do, somewhat, share the opinion that something is up with zlib. But at the same time I haven't touch it since the 8.4 installation so it's a mystery how it could've failed on its own. The only thing performed was an upgrade from 8.4 to 9.5.\n But yes, I can not really say exactly what that upgrade touched and what it didn't touch. Will investigate further. \n\n\n\n\n\nCOMPRESSION LEVEL: 0\nFILE SIZE: 6205982696\nreal 0m38.218s\nuser 0m3.558s\nsys 0m17.309s\n\n\n\n\nCOMPRESSION LEVEL: 1\nFILE SIZE: 1391475419\nreal 4m3.725s\nuser 3m54.132s\nsys 0m5.565s\n\n\n\n\nCOMPRESSION LEVEL: 2\nFILE SIZE: 1344563403\nreal 4m18.574s\nuser 4m9.466s\nsys 0m5.417s\n\n\n\n\nCOMPRESSION LEVEL: 3\nFILE SIZE: 1267601394\nreal 5m23.373s\nuser 5m14.339s\nsys 0m5.462s\n\n\n\n\nCOMPRESSION LEVEL: 4\nFILE SIZE: 1241632684\nreal 6m19.501s\nuser 6m10.148s\nsys 0m5.655s\n\n\n\n\nCOMPRESSION LEVEL: 5\nFILE SIZE: 1178377949\nreal 9m18.449s\nuser 9m9.733s\nsys 0m5.169s\n\n\n\n\nCOMPRESSION LEVEL: 6\nFILE SIZE: 1137727582\nreal 13m28.424s\nuser 13m19.842s\nsys 0m5.036s\n\n\n\n\nCOMPRESSION LEVEL: 7\nFILE SIZE: 1126257786\nreal 16m39.392s\nuser 16m30.094s\nsys 0m5.724s\n\n\n\n\nCOMPRESSION LEVEL: 8\nFILE SIZE: 1111804793\nreal 30m37.135s\nuser 30m26.785s\nsys 0m6.660s\n\n\n\n\nCOMPRESSION LEVEL: 9\nFILE SIZE: 1112194596\nreal 33m40.325s\nuser 33m27.122s\nsys 0m6.498s\n\n\n\n\nCOMPRESSION LEVEL AT DEFAULT NO FLAG PASSED TO 'pg_dump'\nFILE SIZE: 1140261276\nreal 13m18.178s\nuser 13m9.417s\nsys 0m5.242s\n\n\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\nmobile\n [ +\n 46 (0)704 71 89 54 ]\nskype\n [ cednert ] \n\n\nOn 22 Nov 2017, at 11:32, Matthew Hall <[email protected]> wrote:\n\n\n\nOn Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) <[email protected]> wrote:\n\nWHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\n\nI would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized right by the C compiler or something else sucks w/ the compression settings. The CPU should easily blast away at that faster than disks can read.\n\nI did do some studies of this previously some years ago, and I found gzip -6 offered the best ratio between size reduction and CPU time out of a very wide range of formats, but at the time xz was also not yet available.\n\nIf I were you I would first pipe the uncompressed output through a separate compression command, then you can experiment with the flags and threads, and you already get another separate process for the kernel to put on other CPUs as an automatic bonus for multi-core\n with minimal work.\n\nAfter that, xz is GNU standard now and has xz -T for cranking up some threads, with little extra effort for the user. But it can be kind of slow so probably need to lower the compression level somewhat depending a bit on some time testing. I would try on some\n medium sized DB table, like a bit over the size of system RAM, instead of dumping this great big DB, in order to benchmark a couple times until it looks happy.\n\nMatthew",
"msg_date": "Wed, 22 Nov 2017 12:17:30 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
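For anyone who wants to reproduce this kind of matrix, a minimal sketch of such a benchmark loop is below. The database name, the output path and the macOS form of the stat size call are assumptions, not details taken from the script actually used above.

  #!/bin/bash
  # Sketch: time pg_dump at every zlib compression level and record the
  # resulting file size. "testdb" and /tmp are hypothetical placeholders.
  DB=testdb
  for LEVEL in 0 1 2 3 4 5 6 7 8 9; do
      OUT=/tmp/${DB}_z${LEVEL}.backup
      echo "COMPRESSION LEVEL: ${LEVEL}"
      time pg_dump --host=localhost --user=postgres --format=custom \
          --compress=${LEVEL} --file="${OUT}" "${DB}"
      echo "FILE SIZE: $(stat -f%z "${OUT}")"   # macOS stat; use stat -c%s on Linux
  done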
{
"msg_contents": "When investigating the zlib lead I looked at 8.4 installation and 9.5 installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4 doesn't. But that's a header file and I have no idea how that really works and if that's the one used by pgres9.5 or not. The version in it says 1.2.8 and that's what the Instruments are showing when I monitor pg_dump while running.\n\nGuess I'll have to install instruments in a dev env and do a pg_dump with 8.4 to see the difference. Tedious. =/\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 22 Nov 2017, at 13:17, Henrik Cednert (Filmlance) <[email protected]<mailto:[email protected]>> wrote:\n\n\nThis sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing<http://aka.ms/LearnAboutSpoofing>\n Feedback<http://aka.ms/SafetyTipsFeedback>\nHello\n\nI've ran it with all the different compression levels on one of the smaller db's now. And not sending any flags to it see is, as I've seen hinted on some page on internet, same as level 6.\n\nI do, somewhat, share the opinion that something is up with zlib. But at the same time I haven't touch it since the 8.4 installation so it's a mystery how it could've failed on its own. The only thing performed was an upgrade from 8.4 to 9.5. But yes, I can not really say exactly what that upgrade touched and what it didn't touch. Will investigate further.\n\n\nCOMPRESSION LEVEL: 0\nFILE SIZE: 6205982696\nreal 0m38.218s\nuser 0m3.558s\nsys 0m17.309s\n\n\nCOMPRESSION LEVEL: 1\nFILE SIZE: 1391475419\nreal 4m3.725s\nuser 3m54.132s\nsys 0m5.565s\n\n\nCOMPRESSION LEVEL: 2\nFILE SIZE: 1344563403\nreal 4m18.574s\nuser 4m9.466s\nsys 0m5.417s\n\n\nCOMPRESSION LEVEL: 3\nFILE SIZE: 1267601394\nreal 5m23.373s\nuser 5m14.339s\nsys 0m5.462s\n\n\nCOMPRESSION LEVEL: 4\nFILE SIZE: 1241632684\nreal 6m19.501s\nuser 6m10.148s\nsys 0m5.655s\n\n\nCOMPRESSION LEVEL: 5\nFILE SIZE: 1178377949\nreal 9m18.449s\nuser 9m9.733s\nsys 0m5.169s\n\n\nCOMPRESSION LEVEL: 6\nFILE SIZE: 1137727582\nreal 13m28.424s\nuser 13m19.842s\nsys 0m5.036s\n\n\nCOMPRESSION LEVEL: 7\nFILE SIZE: 1126257786\nreal 16m39.392s\nuser 16m30.094s\nsys 0m5.724s\n\n\nCOMPRESSION LEVEL: 8\nFILE SIZE: 1111804793\nreal 30m37.135s\nuser 30m26.785s\nsys 0m6.660s\n\n\nCOMPRESSION LEVEL: 9\nFILE SIZE: 1112194596\nreal 33m40.325s\nuser 33m27.122s\nsys 0m6.498s\n\n\nCOMPRESSION LEVEL AT DEFAULT NO FLAG PASSED TO 'pg_dump'\nFILE SIZE: 1140261276\nreal 13m18.178s\nuser 13m9.417s\nsys 0m5.242s\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\nmobile [ + 46 (0)704 71 89 54 ]\nskype [ cednert ]\n\nOn 22 Nov 2017, at 11:32, Matthew Hall <[email protected]<mailto:[email protected]>> wrote:\n\n\nOn Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) <[email protected]<mailto:[email protected]>> wrote:\n\nWHat's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?\n\nI would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized right by the C compiler or something else sucks w/ the compression settings. 
The CPU should easily blast away at that faster than disks can read.\n\nI did do some studies of this previously some years ago, and I found gzip -6 offered the best ratio between size reduction and CPU time out of a very wide range of formats, but at the time xz was also not yet available.\n\nIf I were you I would first pipe the uncompressed output through a separate compression command, then you can experiment with the flags and threads, and you already get another separate process for the kernel to put on other CPUs as an automatic bonus for multi-core with minimal work.\n\nAfter that, xz is GNU standard now and has xz -T for cranking up some threads, with little extra effort for the user. But it can be kind of slow so probably need to lower the compression level somewhat depending a bit on some time testing. I would try on some medium sized DB table, like a bit over the size of system RAM, instead of dumping this great big DB, in order to benchmark a couple times until it looks happy.\n\nMatthew",
"msg_date": "Wed, 22 Nov 2017 13:06:58 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Hi,\n\nOn 2017-11-22 02:32:45 -0800, Matthew Hall wrote:\n> I would say most likely your zlib is screwed up somehow, like maybe it\n> didn't get optimized right by the C compiler or something else sucks\n> w/ the compression settings. The CPU should easily blast away at that\n> faster than disks can read.\n\nHuh? Zlib compresses at a few 10s of MB/s.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 22 Nov 2017 08:56:19 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
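Andres' point can be sanity-checked outside PostgreSQL by timing plain gzip (which uses zlib) on the same machine. The input path below is a placeholder for the uncompressed level-0 dump from the benchmark earlier in the thread, and the result is only a rough proxy for what pg_dump's own zlib calls achieve.

  # Rough zlib throughput check, independent of pg_dump:
  time gzip -6 -c /tmp/testdb_z0.backup > /dev/null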
{
"msg_contents": "On Nov 22, 2017, at 5:06 AM, Henrik Cednert (Filmlance) <[email protected]> wrote:\n> \n> When investigating the zlib lead I looked at 8.4 installation and 9.5 installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4 doesn't. But that's a header file and I have no idea how that really works and if that's the one used by pgres9.5 or not. The version in it says 1.2.8 and that's what the Instruments are showing when I monitor pg_dump while running. \n> \n> Guess I'll have to install instruments in a dev env and do a pg_dump with 8.4 to see the difference. Tedious. =/ \n\nI would also check the library linkages of the pg_dump binaries.\n\nSee if one thing is using an embedded zlib and the other a system zlib.\n\nThen you could imagine one didn't get compiled with the best-performing CFLAGS, etc.\n\nMatthew.\n",
"msg_date": "Wed, 22 Nov 2017 11:52:11 -0800",
"msg_from": "Matthew Hall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
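On macOS the linkage check Matthew suggests can be done with otool; the install paths below follow the /Library/PostgreSQL layout mentioned earlier in the thread and may differ on other setups.

  # Which zlib is each pg_dump binary dynamically linked against?
  otool -L /Library/PostgreSQL/8.4/bin/pg_dump | grep -i libz
  otool -L /Library/PostgreSQL/9.5/bin/pg_dump | grep -i libz
  # On Linux the equivalent check would be: ldd "$(which pg_dump)" | grep libz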
{
"msg_contents": "Hi Matthew\n\nActually running that test in a vm right now. =)\n\nThis is the same db dumped from 9.5 and 8.4 with compression 6 in the same system (smaller db in a vm).\n\n9.5:\nreal 82m33.744s\nuser 60m55.069s\nsys 3m3.375s\n\n8.4\nreal 42m46.381s\nuser 23m50.145s\nsys 2m9.853s\n\nWhen looking at a sample and/or instruments I think I can confirm what your hunch was/is. But I'm not skilled enough to say what's right and wrong nor what action to take. But 8.4 seems to use a system library libz.1.dylib while the 9.4 dump refers to libz.1.2.8.dylib which I think is the one shipping with that particular installation I'm using (/Library/PostgreSQL//9.5/include/zlib.h).\nhttps://www.dropbox.com/s/q1f4p7jzw0ceynh/libz.png?dl=0\n\nhttps://pastebin.com/RWWsumQL\n\nI have no idea if I can relink the libs in 9.5 to other ones? support from the software company in question have suggested updating to a newer version of 9.5 but not sure that'll solve it. I'm on thin ice here and not sure how to proceed. I'm not even sure if I should or if I should dump uncompressed and let something threaded take care of the compression. Sadly i'm the type of guy that can't let go so would be nice to get this to work properly anyways. =)\n\nCHeers and many thanks again.\n\n\n\n--\nHenrik Cednert\ncto | compositor\n\nFilmlance International\n\nOn 22 Nov 2017, at 20:52, Matthew Hall <[email protected]<mailto:[email protected]>> wrote:\n\nOn Nov 22, 2017, at 5:06 AM, Henrik Cednert (Filmlance) <[email protected]<mailto:[email protected]>> wrote:\n\nWhen investigating the zlib lead I looked at 8.4 installation and 9.5 installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4 doesn't. But that's a header file and I have no idea how that really works and if that's the one used by pgres9.5 or not. The version in it says 1.2.8 and that's what the Instruments are showing when I monitor pg_dump while running.\n\nGuess I'll have to install instruments in a dev env and do a pg_dump with 8.4 to see the difference. Tedious. =/\n\nI would also check the library linkages of the pg_dump binaries.\n\nSee if one thing is using an embedded zlib and the other a system zlib.\n\nThen you could imagine one didn't get compiled with the best-performing CFLAGS, etc.\n\nMatthew.\n\n\n\n\n\n\n\nHi Matthew\n\n\nActually running that test in a vm right now. =) \n\n\nThis is the same db dumped from 9.5 and 8.4 with compression 6 in the same system (smaller db in a vm).\n\n\n9.5:\n\nreal 82m33.744s\nuser 60m55.069s\nsys 3m3.375s\n\n\n8.4\n\nreal 42m46.381s\nuser 23m50.145s\nsys 2m9.853s\n\n\n\nWhen looking at a sample and/or instruments I think I can confirm what your hunch was/is. But I'm not skilled enough to say what's right and wrong nor what action to take. But 8.4 seems to use a system library libz.1.dylib while the 9.4 dump refers\n to libz.1.2.8.dylib which I think is the one shipping with that particular installation I'm using (/Library/PostgreSQL//9.5/include/zlib.h).\nhttps://www.dropbox.com/s/q1f4p7jzw0ceynh/libz.png?dl=0\n\n\nhttps://pastebin.com/RWWsumQL\n\n\nI have no idea if I can relink the libs in 9.5 to other ones? support from the software company in question have suggested updating to a newer version of 9.5 but not sure that'll solve it. I'm on thin ice here and not sure how to proceed. I'm\n not even sure if I should or if I should dump uncompressed and let something threaded take care of the compression. 
Sadly i'm the type of guy that can't let go so would be nice to get this to work properly anyways. =) \n\n\nCHeers and many thanks again.\n \n\n\n\n--\nHenrik\n Cednert\ncto\n | compositor\n\nFilmlance\n International\n\n\nOn 22 Nov 2017, at 20:52, Matthew Hall <[email protected]> wrote:\n\n\nOn Nov 22, 2017, at 5:06 AM, Henrik Cednert (Filmlance) <[email protected]> wrote:\n\nWhen investigating the zlib lead I looked at 8.4 installation and 9.5 installation. 9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4 doesn't. But that's a header file and I have no idea how that really works and if that's the one used\n by pgres9.5 or not. The version in it says 1.2.8 and that's what the Instruments are showing when I monitor pg_dump while running.\n\n\nGuess I'll have to install instruments in a dev env and do a pg_dump with 8.4 to see the difference. Tedious. =/\n\n\n\nI would also check the library linkages of the pg_dump binaries.\n\nSee if one thing is using an embedded zlib and the other a system zlib.\n\nThen you could imagine one didn't get compiled with the best-performing CFLAGS, etc.\n\nMatthew.",
"msg_date": "Wed, 22 Nov 2017 20:06:53 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
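If relinking turns out to be impractical, the "dump uncompressed and compress separately" route mentioned above could look roughly like this. pigz is not part of the PostgreSQL install and would have to be added separately (for example via Homebrew), and the database and file names are placeholders.

  # Dump without built-in compression, then compress with all cores:
  pg_dump --host=localhost --user=postgres --format=custom --compress=0 \
      --file=resolve_uncompressed.backup resolve
  pigz -p 8 -6 resolve_uncompressed.backup     # writes resolve_uncompressed.backup.gz
  # Decompress back to the plain custom-format archive before the
  # application imports it again:
  unpigz resolve_uncompressed.backup.gz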
{
"msg_contents": "Hello,\n\nI had this behaviors when the upgraded pg 9.5 was on ssl mode by default.\n\nSo i deactivated ssl mode in postgresql.conf. That's all.\n\nRegards,\n\nPatrick\n\n\n\nOn 11/21/2017 03:28 PM, Henrik Cednert (Filmlance) wrote:\n> Hello\n>\n> We use a system in filmproduction called DaVinci Resolve. It uses a\n> pgsql database when you work in a collaborative workflow and multiple\n> people share projects. Previously it was using pgsql 8.4 but for a new\n> major upgrade they recommend an upgrade to 9.5. Probably also to some\n> macOS limitation/support and that 9.x is required for macOS >10.11.\n>\n> They (BlackMagic Design) provide three tools for the migration.�\n> 1. For for dumping everything form the old 8.4 database\n> 2. One for upgrading from 8.4 to 9.5\n> 3. One for restoring the backup in step 1 in 9.5\n>\n> All that went smoothly and working in the systems also works smoothly\n> and as good as previously, maybe even a bit better/faster.�\n>\n> What's not working smoothly is my daily pg_dump's though. I don't have\n> a reference to what's a big and what's a small database since I'm no\n> db-guy and don't really maintain nor work with it on a daily basis.\n> Pretty much only this system we use that has a db system like this.\n> Below is a list of what we dump.\n>\n> 930M Nov 18 13:31 filmserver03_2017-11-18_132043_dailies_2017_01.backup\n> 2.2K Nov 18 13:20 filmserver03_2017-11-18_132043_postgres.backup\n> 522K Nov 18 13:20 filmserver03_2017-11-18_132043_resolve.backup\n> 23G Nov 18 19:37 filmserver03_2017-11-18_132043_resolve_2017_01.backup\n> 5.1G Nov 18 20:54 filmserver03_2017-11-18_132043_resolve_2017_02.backup\n> 10G Nov 18 23:34\n> filmserver03_2017-11-18_132043_resolve_filmserver02.backup\n> 516K Nov 18 23:35 filmserver03_2017-11-18_132043_temp_backup_test.backup\n> 1.9G Nov 19 00:05 filmserver03_2017-11-18_132043_temp_dev_resolve14.backup\n>\n>\n> The last pg_dump with 8.4 took�212 minutes and 49 seconds.And now with\n> 9.5 the very same pg_dump takes�644 minutes and 40 seconds. To it\n> takes about three times as long now and I have no idea to why.�Nothing\n> in the system or hardware other than the pgsql upgrade have change.��\n>\n> I dump the db's with a custom script and this is the line I use to get\n> the DB's:\n> DATABASES=$(${BINARY_PATH}/psql --user=postgres -w --no-align\n> --tuples-only --command=\"SELECT datname from pg_database WHERE NOT\n> datistemplate\")\n>\n> After that I iterate over them with a for loop and dump with:\n> ${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password\n> --blobs --format=custom --verbose\n> --file=${pg_dump_filename}_${database}.backup ${database} | tee -a\n> ${log_pg_dump}_${database}.log � �\n>\n> When observing the system during the dump it LOOKS like it did in 8.4.\n> pg_dump is using 100% of one core and from what I can see it does this\n> through out the operation. But it's still sooooo much slower. I read\n> about the parallell option in pg_dump for 9.5 but sadly I cannot dump\n> like that because the application in question can (probably) not\n> import that format on it's own and I would have to use pgrestore or\n> something. Which in theory is fine but sometimes one of the artists\n> have to import the db backup. So need to keep it simple.\n>\n> The system is:\n> MacPro 5,1\n> 2x2.66 GHz Quad Core Xeon\n> 64 GB RAM\n> macOS 10.11.6\n> PostgreSQL 9.5.4\n> DB on a 6 disk SSD RAID\n>\n>\n> I hope I got all the info needed. 
Really hope someone with more\n> expertise and skills than me can point me in the right direction.\n>\n> Cheers and thanks\n>\n>\n> --\n> Henrik Cednert\n> cto | compositor\n>\n>",
"msg_date": "Wed, 22 Nov 2017 22:07:30 +0100",
"msg_from": "Patrick KUI-LI <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
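For reference, the change Patrick describes is a one-line edit to postgresql.conf; in 9.5 the ssl setting is only read at server start, so a restart is needed, and the data directory path below is just an example.

  # postgresql.conf: set (or uncomment) the line  ssl = off  and then restart:
  sudo -u postgres pg_ctl -D /Library/PostgreSQL/9.5/data restart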
{
"msg_contents": "On 22 Nov 2017, at 22:07, Patrick KUI-LI <[email protected]<mailto:[email protected]>> wrote:\n\n\nHello,\n\nI had this behaviors when the upgraded pg 9.5 was on ssl mode by default.\n\nSo i deactivated ssl mode in postgresql.conf. That's all.\n\nRegards,\n\nPatrick\n\n\nHello\n\nAnd you just uncommented the 'ssl = off' line in the config for this?\n\nIs this default behaviour different from 8.4? Is there a 'show running config' for pgsql?\n\nI tried that in the test vm and didn't really give me a significant difference.\n\nCOMPRESSION LEVEL: 6, SSL ON\nreal 82m33.744s\nuser 60m55.069s\nsys 3m3.375s\n\n\nCOMPRESSION LEVEL: 6, SSL OFF\nreal 76m31.083s\nuser 61m23.282s\nsys 1m23.341s\n\n\n\n\n\n\n\n\n\nOn 22 Nov 2017, at 22:07, Patrick KUI-LI <[email protected]> wrote:\n\n\n\nHello,\nI had this behaviors when the upgraded pg 9.5 was on ssl mode by default.\nSo i deactivated ssl mode in postgresql.conf. That's all.\n\nRegards,\nPatrick\n\n\n\n\n\n\n\n\nHello\n\n\nAnd you just uncommented the 'ssl = off' line in the config for this? \n\n\nIs this default behaviour different from 8.4? Is there a 'show running config' for pgsql?\n\n\n\nI tried that in the test vm and didn't really give me a significant difference. \n\n\nCOMPRESSION LEVEL: 6, SSL ON\n\nreal\n82m33.744s\nuser\n60m55.069s\nsys\n3m3.375s\n\n\n\n\n \nCOMPRESSION LEVEL: 6, SSL OFF\n\nreal\n76m31.083s\nuser\n61m23.282s\nsys\n1m23.341s",
"msg_date": "Thu, 23 Nov 2017 09:26:16 +0000",
"msg_from": "\"Henrik Cednert (Filmlance)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
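On the "show running config" question: the server reports the values it is actually running with, for example:

  psql -U postgres -c "SHOW ssl;"
  # or, for any setting plus where its value came from:
  psql -U postgres -c "SELECT name, setting, source FROM pg_settings WHERE name = 'ssl';"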
{
"msg_contents": "I confess I don't do dump or any backups much other than file system \nsnapshots.\n\nBut when I do, I don't like how long it takes.\n\nI confess my database is big, I have about 200 GB. But still, dumping it \nshould not take 48 hours (and running) while the system is 75% idle and \nreads are at 4.5 MB/s when the system sustains over 100 MB/s during \nprocessing of table scan and hash join queries.\n\nSomething is wrong with the dump thing. And no, it's not SSL or \nwhatever, I am doing it on a local system with local connections. \nVersion 9.5 something.\n\nregards,\n-Gunther\n\n\nOn 11/23/2017 4:26, Henrik Cednert (Filmlance) wrote:\n>\n>\n>> On 22 Nov 2017, at 22:07, Patrick KUI-LI <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> Hello,\n>>\n>> I had this behaviors when the upgraded pg 9.5 was on ssl mode by default.\n>>\n>> So i deactivated ssl mode in postgresql.conf. That's all.\n>>\n>> Regards,\n>>\n>> Patrick\n>>\n>\n>\n> Hello\n>\n> And you just uncommented the �'ssl = off' line in the config for this?\n>\n> Is this default behaviour different from 8.4? Is there a 'show running \n> config' for pgsql?\n>\n> I tried that in the test vm and didn't really give me a significant \n> difference.\n>\n> COMPRESSION LEVEL: 6, SSL ON\n> real82m33.744s\n> user60m55.069s\n> sys3m3.375s\n>\n> COMPRESSION LEVEL: 6, SSL OFF\n> real76m31.083s\n> user61m23.282s\n> sys1m23.341s\n\n\n\n\n\n\n\nI confess I don't do dump or any backups much other than file\n system snapshots.\nBut when I do, I don't like how long it takes.\nI confess my database is big, I have about 200 GB. But still,\n dumping it should not take 48 hours (and running) while the system\n is 75% idle and reads are at 4.5 MB/s when the system sustains\n over 100 MB/s during processing of table scan and hash join\n queries.\nSomething is wrong with the dump thing. And no, it's not SSL or\n whatever, I am doing it on a local system with local connections.\n Version 9.5 something.\n\nregards,\n -Gunther\n\n\nOn 11/23/2017 4:26, Henrik Cednert\n (Filmlance) wrote:\n\n\n\n\n\n\nOn 22 Nov 2017, at 22:07, Patrick KUI-LI <[email protected]> wrote:\n\n\n\nHello,\nI had this behaviors when the upgraded pg 9.5\n was on ssl mode by default.\nSo i deactivated ssl mode in postgresql.conf.\n That's all.\n\nRegards,\nPatrick\n\n\n\n\n\n\n\n\nHello\n\n\n And you just uncommented the �'ssl = off' line in the config for\n this?��\n\n\nIs this default behaviour different from\n 8.4? Is there a 'show running config' for pgsql?\n\n\n\nI tried that in the test vm and didn't really give\n me a significant difference.�\n\n\nCOMPRESSION LEVEL: 6, SSL ON\n\nreal\n82m33.744s\nuser\n60m55.069s\nsys\n3m3.375s\n\n\n\n\n�\nCOMPRESSION LEVEL: 6, SSL OFF\n\nreal\n76m31.083s\nuser\n61m23.282s\nsys\n1m23.341s",
"msg_date": "Thu, 7 Dec 2017 07:46:36 -0500",
"msg_from": "Gunther <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
{
"msg_contents": "Gunther wrote:\n> Something is wrong with the dump thing. And no, it's not SSL or whatever,\n> I am doing it on a local system with local connections. Version 9.5 something.\n\nThat's a lot of useful information.\n\nTry to profile where the time is spent, using \"perf\" or similar.\n\nDo you connect via the network, TCP localhost or UNIX sockets?\nThe last option should be the fastest.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Thu, 07 Dec 2017 18:31:35 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
},
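A concrete form of both suggestions, assuming a Linux machine for perf (on macOS, Instruments or sample would play that role) and a typical socket directory; both are assumptions rather than details from this thread.

  # Attach a profiler to the running dump to see where the time goes:
  perf top -p "$(pgrep -n pg_dump)"
  # Force the dump over the UNIX socket by pointing -h at the socket
  # directory (path varies by distribution):
  pg_dump -h /var/run/postgresql -Fc -f dump.backup mydb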
{
"msg_contents": "On Thu, Dec 7, 2017 at 2:31 PM, Laurenz Albe <[email protected]> wrote:\n> Gunther wrote:\n>> Something is wrong with the dump thing. And no, it's not SSL or whatever,\n>> I am doing it on a local system with local connections. Version 9.5 something.\n>\n> That's a lot of useful information.\n>\n> Try to profile where the time is spent, using \"perf\" or similar.\n>\n> Do you connect via the network, TCP localhost or UNIX sockets?\n> The last option should be the fastest.\n\nYou can use SSL over a local TCP connection. Whether it's the case is the thing.\n\nIn my experience, SSL isn't a problem, but compression *is*. With a\nmodern-enough openssl, enabling compression is tough, it's forcefully\ndisabled by default due to the vulnerabilities that were discovered\nrelated to its use lately.\n\nSo chances are, no matter what you configured, compression isn't being used.\n\nI never measured it compared to earlier versions, but pg_dump is\nindeed quite slow, and the biggest offender is formatting the COPY\ndata to be transmitted over the wire. That's why parallel dump is so\nuseful, you can use all your cores and achieve almost perfect\nmulticore acceleration.\n\nCompression of the archive is also a big overhead, if you want\ncompression but want to keep the overhead to the minimum, set the\nminimum compression level (1).\n\nSomething like:\n\npg_dump -Fd -j 8 -Z 1 -f target_dir yourdb\n\n",
"msg_date": "Thu, 7 Dec 2017 14:56:37 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade"
}
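For completeness, a directory-format dump like the one in Claudio's example is restored with pg_restore, which can also run in parallel; the database name and directory are placeholders.

  pg_restore -j 8 -d yourdb target_dir
  # add --clean or --create as needed when replacing an existing database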
] |
[
{
"msg_contents": "Hi!\n\nI've seen few letters like this on mailing list and for some reason thought\nthat probably it won't happen to us, but here I am lol.\n\nIt's \"nestloop hits again\" situation.\n\nI'll try to provide plan from 9.6 later, but right now I have only plan\nfrom 10.1.\n\nQuery: https://pastebin.com/9b953tT7\nIt was running under 3 seconds (it's our default timeout) and now it runs\nfor 12 minutes.\n\n\\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\n\\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy\ncondition on day column)\n\\d domains: https://pastebin.com/65hk7YCm (73000 rows)\n\nAll three tables are analyzed.\n\nEXPLAIN ANALYZE: https://pastebin.com/PenHEgf0\nEXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)\n\nRegarding server parameters - it's a mighty beast with 2x E5-2630 v3, 192Gb\nof RAM and two very, very fast NVME server class SSD's in RAID1.\n\nWhat can I do with it?\n\n\nAlso maybe this will be useful:\n\n1st query, runs under 1ms\nselect title, id, groups->0->>'provider' provider, domain_ids from adroom\nwhere groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\ncurrent_timestamp between start_ts and stop_ts\n\n2nd query that uses 1st one, runs under 3 ms\nselect distinct unnest(domain_ids) FROM (select title, id,\ngroups->0->>'provider' provider, domain_ids from adroom where\ngroups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\ncurrent_timestamp between start_ts and stop_ts) t1\n\n3rd query which returns 1.5mln rows, runs in about 0.6s\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\nbetween date_trunc('day', current_timestamp - interval '1 week') and\ndate_trunc('day', current_timestamp)\n\nBUT if I'll add to 3rd query one additional condition, which is basically\n2nd query, it will ran same 12 minutes:\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\nbetween date_trunc('day', current_timestamp - interval '1 week') and\ndate_trunc('day', current_timestamp) AND domain_id IN (select distinct\nunnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider,\ndomain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and\nnot is_paused and current_timestamp between start_ts and stop_ts) t1)\n\nPlan of last query:\n Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\ntime=3.512..733248.271 rows=1442797 loops=1)\n -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\ntime=3.380..13.561 rows=3043 loops=1)\n Group Key: (unnest(adroom.domain_ids))\n -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\ntime=2.199..2.607 rows=3043 loops=1)\n Group Key: unnest(adroom.domain_ids)\n -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual\ntime=0.701..1.339 rows=3173 loops=1)\n -> Index Scan using adroom_active_idx on adroom\n(cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4\nloops=1)\n Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND\n(CURRENT_TIMESTAMP <= stop_ts))\n Filter: (((groups -> 0) ->> 'provider'::text) ~\n'^target_mail_ru'::text)\n Rows Removed by Filter: 41\n -> Index Scan using\nadroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat\n(cost=0.58..25524.33 rows=491 width=16) (actual time=104.847..240.846\nrows=474 loops=3043)\n Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP -\n'7 days'::interval))) AND (day <= date_trunc('day'::text,\nCURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids))))\n Planning time: 1.580 ms\n 
Execution time: 733331.740 ms\n\nDmitry Shalashov, relap.io & surfingbird.ru",
"msg_date": "Wed, 22 Nov 2017 17:13:39 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query became very slow after 9.6 -> 10 upgrade"
},
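Until the misestimation itself is addressed, one session-local stopgap that matches the fast plan already measured with nested loops disabled is to turn them off just for this statement; the database name is a placeholder and the SELECT below is only a stand-in for the real reporting query from https://pastebin.com/9b953tT7.

  # SET LOCAL keeps the change scoped to the enclosing transaction:
  psql -d mydb -c "BEGIN; SET LOCAL enable_nestloop = off; SELECT count(*) FROM adroom_stat; COMMIT;"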
{
"msg_contents": "Hello!\n\nWhat about :\n\nselect name,setting from pg_settings where name like '%_cost';\n\n \n\n--\n\nAlex Ignatov \nPostgres Professional: <http://www.postgrespro.com> http://www.postgrespro.com \nThe Russian Postgres Company\n\n \n\n \n\nFrom: Dmitry Shalashov [mailto:[email protected]] \nSent: Wednesday, November 22, 2017 5:14 PM\nTo: [email protected]\nSubject: Query became very slow after 9.6 -> 10 upgrade\n\n \n\nHi!\n\n \n\nI've seen few letters like this on mailing list and for some reason thought that probably it won't happen to us, but here I am lol.\n\n \n\nIt's \"nestloop hits again\" situation.\n\n \n\nI'll try to provide plan from 9.6 later, but right now I have only plan from 10.1.\n\n \n\nQuery: https://pastebin.com/9b953tT7\n\nIt was running under 3 seconds (it's our default timeout) and now it runs for 12 minutes.\n\n \n\n\\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\n\n\\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy condition on day column)\n\n\\d domains: https://pastebin.com/65hk7YCm (73000 rows)\n\n \n\nAll three tables are analyzed.\n\n \n\nEXPLAIN ANALYZE: https://pastebin.com/PenHEgf0\n\nEXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)\n\n \n\nRegarding server parameters - it's a mighty beast with 2x E5-2630 v3, 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1.\n\n \n\nWhat can I do with it?\n\n \n\n \n\nAlso maybe this will be useful:\n\n \n\n1st query, runs under 1ms\n\nselect title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts\n\n \n\n2nd query that uses 1st one, runs under 3 ms\n\nselect distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1\n\n \n\n3rd query which returns 1.5mln rows, runs in about 0.6s\n\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp)\n\n \n\nBUT if I'll add to 3rd query one additional condition, which is basically 2nd query, it will ran same 12 minutes:\n\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) AND domain_id IN (select distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1)\n\n \n\nPlan of last query:\n\n Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual time=3.512..733248.271 rows=1442797 loops=1)\n\n -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual time=3.380..13.561 rows=3043 loops=1)\n\n Group Key: (unnest(adroom.domain_ids))\n\n -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual time=2.199..2.607 rows=3043 loops=1)\n\n Group Key: unnest(adroom.domain_ids)\n\n -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual time=0.701..1.339 rows=3173 loops=1)\n\n -> Index Scan using adroom_active_idx on adroom (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4 loops=1)\n\n Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND (CURRENT_TIMESTAMP <= stop_ts))\n\n Filter: 
(((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text)\n\n Rows Removed by Filter: 41\n\n -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual time=104.847..240.846 rows=474 loops=3043)\n\n Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <= date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids))))\n\n Planning time: 1.580 ms\n\n Execution time: 733331.740 ms\n\n \n\nDmitry Shalashov, <http://relap.io/> relap.io & <http://surfingbird.ru> surfingbird.ru",
"msg_date": "Wed, 22 Nov 2017 17:24:01 +0300",
"msg_from": "\"Alex Ignatov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Sure, here it goes:\n\n name | setting\n----------------------+---------\n cpu_index_tuple_cost | 0.005\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n parallel_setup_cost | 1000\n parallel_tuple_cost | 0.1\n random_page_cost | 1\n seq_page_cost | 1\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-22 17:24 GMT+03:00 Alex Ignatov <[email protected]>:\n\n> Hello!\n>\n> What about :\n>\n> select name,setting from pg_settings where name like '%_cost';\n>\n>\n>\n> --\n>\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n>\n>\n> *From:* Dmitry Shalashov [mailto:[email protected]]\n> *Sent:* Wednesday, November 22, 2017 5:14 PM\n> *To:* [email protected]\n> *Subject:* Query became very slow after 9.6 -> 10 upgrade\n>\n>\n>\n> Hi!\n>\n>\n>\n> I've seen few letters like this on mailing list and for some reason\n> thought that probably it won't happen to us, but here I am lol.\n>\n>\n>\n> It's \"nestloop hits again\" situation.\n>\n>\n>\n> I'll try to provide plan from 9.6 later, but right now I have only plan\n> from 10.1.\n>\n>\n>\n> Query: https://pastebin.com/9b953tT7\n>\n> It was running under 3 seconds (it's our default timeout) and now it runs\n> for 12 minutes.\n>\n>\n>\n> \\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\n>\n> \\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy\n> condition on day column)\n>\n> \\d domains: https://pastebin.com/65hk7YCm (73000 rows)\n>\n>\n>\n> All three tables are analyzed.\n>\n>\n>\n> EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0\n>\n> EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)\n>\n>\n>\n> Regarding server parameters - it's a mighty beast with 2x E5-2630 v3,\n> 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1.\n>\n>\n>\n> What can I do with it?\n>\n>\n>\n>\n>\n> Also maybe this will be useful:\n>\n>\n>\n> 1st query, runs under 1ms\n>\n> select title, id, groups->0->>'provider' provider, domain_ids from adroom\n> where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\n> current_timestamp between start_ts and stop_ts\n>\n>\n>\n> 2nd query that uses 1st one, runs under 3 ms\n>\n> select distinct unnest(domain_ids) FROM (select title, id,\n> groups->0->>'provider' provider, domain_ids from adroom where\n> groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\n> current_timestamp between start_ts and stop_ts) t1\n>\n>\n>\n> 3rd query which returns 1.5mln rows, runs in about 0.6s\n>\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp)\n>\n>\n>\n> BUT if I'll add to 3rd query one additional condition, which is basically\n> 2nd query, it will ran same 12 minutes:\n>\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp) AND domain_id IN (select distinct\n> unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider,\n> domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and\n> not is_paused and current_timestamp between start_ts and stop_ts) t1)\n>\n>\n>\n> Plan of last query:\n>\n> Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\n> time=3.512..733248.271 rows=1442797 loops=1)\n>\n> -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\n> time=3.380..13.561 
rows=3043 loops=1)\n>\n> Group Key: (unnest(adroom.domain_ids))\n>\n> -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\n> time=2.199..2.607 rows=3043 loops=1)\n>\n> Group Key: unnest(adroom.domain_ids)\n>\n> -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual\n> time=0.701..1.339 rows=3173 loops=1)\n>\n> -> Index Scan using adroom_active_idx on adroom\n> (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4\n> loops=1)\n>\n> Index Cond: ((CURRENT_TIMESTAMP >= start_ts)\n> AND (CURRENT_TIMESTAMP <= stop_ts))\n>\n> Filter: (((groups -> 0) ->> 'provider'::text) ~\n> '^target_mail_ru'::text)\n>\n> Rows Removed by Filter: 41\n>\n> -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx\n> on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual\n> time=104.847..240.846 rows=474 loops=3043)\n>\n> Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP -\n> '7 days'::interval))) AND (day <= date_trunc('day'::text,\n> CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids))))\n>\n> Planning time: 1.580 ms\n>\n> Execution time: 733331.740 ms\n>\n>\n>\n> Dmitry Shalashov, relap.io & surfingbird.ru\n>",
"msg_date": "Wed, 22 Nov 2017 17:29:20 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Here is my select right after initdb:\n\n \n\npostgres=# select name,setting from pg_settings where name like '%_cost';\n\n name | setting\n\n----------------------+---------\n\ncpu_index_tuple_cost | 0.005\n\ncpu_operator_cost | 0.0025\n\ncpu_tuple_cost | 0.01\n\nparallel_setup_cost | 1000\n\nparallel_tuple_cost | 0.1\n\nrandom_page_cost | 4\n\nseq_page_cost | 1\n\n \n\n \n\nCan you generate plan with random_page_cost = 4?\n\n \n\n \n\n--\n\nAlex Ignatov \nPostgres Professional: <http://www.postgrespro.com> http://www.postgrespro.com \nThe Russian Postgres Company\n\n \n\nFrom: Dmitry Shalashov [mailto:[email protected]] \nSent: Wednesday, November 22, 2017 5:29 PM\nTo: Alex Ignatov <[email protected]>\nCc: [email protected]\nSubject: Re: Query became very slow after 9.6 -> 10 upgrade\n\n \n\nSure, here it goes:\n\n \n\n name | setting\n\n----------------------+---------\n\n cpu_index_tuple_cost | 0.005\n\n cpu_operator_cost | 0.0025\n\n cpu_tuple_cost | 0.01\n\n parallel_setup_cost | 1000\n\n parallel_tuple_cost | 0.1\n\n random_page_cost | 1\n\n seq_page_cost | 1\n\n\n\n\n \n\nDmitry Shalashov, <http://relap.io/> relap.io & <http://surfingbird.ru> surfingbird.ru\n\n \n\n2017-11-22 17:24 GMT+03:00 Alex Ignatov <[email protected] <mailto:[email protected]> >:\n\nHello!\n\nWhat about :\n\nselect name,setting from pg_settings where name like '%_cost';\n\n \n\n--\n\nAlex Ignatov \nPostgres Professional: <http://www.postgrespro.com> http://www.postgrespro.com \nThe Russian Postgres Company\n\n \n\n \n\nFrom: Dmitry Shalashov [mailto:[email protected] <mailto:[email protected]> ] \nSent: Wednesday, November 22, 2017 5:14 PM\nTo: [email protected] <mailto:[email protected]> \nSubject: Query became very slow after 9.6 -> 10 upgrade\n\n \n\nHi!\n\n \n\nI've seen few letters like this on mailing list and for some reason thought that probably it won't happen to us, but here I am lol.\n\n \n\nIt's \"nestloop hits again\" situation.\n\n \n\nI'll try to provide plan from 9.6 later, but right now I have only plan from 10.1.\n\n \n\nQuery: https://pastebin.com/9b953tT7\n\nIt was running under 3 seconds (it's our default timeout) and now it runs for 12 minutes.\n\n \n\n\\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\n\n\\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy condition on day column)\n\n\\d domains: https://pastebin.com/65hk7YCm (73000 rows)\n\n \n\nAll three tables are analyzed.\n\n \n\nEXPLAIN ANALYZE: https://pastebin.com/PenHEgf0\n\nEXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)\n\n \n\nRegarding server parameters - it's a mighty beast with 2x E5-2630 v3, 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1.\n\n \n\nWhat can I do with it?\n\n \n\n \n\nAlso maybe this will be useful:\n\n \n\n1st query, runs under 1ms\n\nselect title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts\n\n \n\n2nd query that uses 1st one, runs under 3 ms\n\nselect distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1\n\n \n\n3rd query which returns 1.5mln rows, runs in about 0.6s\n\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and 
date_trunc('day', current_timestamp)\n\n \n\nBUT if I'll add to 3rd query one additional condition, which is basically 2nd query, it will ran same 12 minutes:\n\nSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) AND domain_id IN (select distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1)\n\n \n\nPlan of last query:\n\n Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual time=3.512..733248.271 rows=1442797 loops=1)\n\n -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual time=3.380..13.561 rows=3043 loops=1)\n\n Group Key: (unnest(adroom.domain_ids))\n\n -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual time=2.199..2.607 rows=3043 loops=1)\n\n Group Key: unnest(adroom.domain_ids)\n\n -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual time=0.701..1.339 rows=3173 loops=1)\n\n -> Index Scan using adroom_active_idx on adroom (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4 loops=1)\n\n Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND (CURRENT_TIMESTAMP <= stop_ts))\n\n Filter: (((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text)\n\n Rows Removed by Filter: 41\n\n -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual time=104.847..240.846 rows=474 loops=3043)\n\n Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <= date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids))))\n\n Planning time: 1.580 ms\n\n Execution time: 733331.740 ms\n\n \n\nDmitry Shalashov, <http://relap.io/> relap.io & <http://surfingbird.ru> surfingbird.ru\n\n \n\n\nHere is my select right after initdb: postgres=# select name,setting from pg_settings where name like '%_cost'; name | setting----------------------+--------- cpu_index_tuple_cost | 0.005 cpu_operator_cost | 0.0025 cpu_tuple_cost | 0.01 parallel_setup_cost | 1000 parallel_tuple_cost | 0.1 random_page_cost | 4 seq_page_cost | 1 Can you generate plan with random_page_cost = 4? --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Dmitry Shalashov [mailto:[email protected]] Sent: Wednesday, November 22, 2017 5:29 PMTo: Alex Ignatov <[email protected]>Cc: [email protected]: Re: Query became very slow after 9.6 -> 10 upgrade Sure, here it goes: name | setting----------------------+--------- cpu_index_tuple_cost | 0.005 cpu_operator_cost | 0.0025 cpu_tuple_cost | 0.01 parallel_setup_cost | 1000 parallel_tuple_cost | 0.1 random_page_cost | 1 seq_page_cost | 1 Dmitry Shalashov, relap.io & surfingbird.ru 2017-11-22 17:24 GMT+03:00 Alex Ignatov <[email protected]>:Hello!What about :select name,setting from pg_settings where name like '%_cost'; --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Dmitry Shalashov [mailto:[email protected]] Sent: Wednesday, November 22, 2017 5:14 PMTo: [email protected]: Query became very slow after 9.6 -> 10 upgrade Hi! I've seen few letters like this on mailing list and for some reason thought that probably it won't happen to us, but here I am lol. It's \"nestloop hits again\" situation. 
I'll try to provide plan from 9.6 later, but right now I have only plan from 10.1. Query: https://pastebin.com/9b953tT7It was running under 3 seconds (it's our default timeout) and now it runs for 12 minutes. \\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy condition on day column)\\d domains: https://pastebin.com/65hk7YCm (73000 rows) All three tables are analyzed. EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s) Regarding server parameters - it's a mighty beast with 2x E5-2630 v3, 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1. What can I do with it? Also maybe this will be useful: 1st query, runs under 1msselect title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts 2nd query that uses 1st one, runs under 3 msselect distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1 3rd query which returns 1.5mln rows, runs in about 0.6sSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) BUT if I'll add to 3rd query one additional condition, which is basically 2nd query, it will ran same 12 minutes:SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) AND domain_id IN (select distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1) Plan of last query: Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual time=3.512..733248.271 rows=1442797 loops=1) -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual time=3.380..13.561 rows=3043 loops=1) Group Key: (unnest(adroom.domain_ids)) -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual time=2.199..2.607 rows=3043 loops=1) Group Key: unnest(adroom.domain_ids) -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual time=0.701..1.339 rows=3173 loops=1) -> Index Scan using adroom_active_idx on adroom (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4 loops=1) Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND (CURRENT_TIMESTAMP <= stop_ts)) Filter: (((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text) Rows Removed by Filter: 41 -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual time=104.847..240.846 rows=474 loops=3043) Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <= date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids)))) Planning time: 1.580 ms Execution time: 733331.740 ms Dmitry Shalashov, relap.io & surfingbird.ru",
"msg_date": "Wed, 22 Nov 2017 17:44:18 +0300",
"msg_from": "\"Alex Ignatov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Query became very slow after 9.6 -> 10 upgrade"
},
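Alex's request can be tried without touching postgresql.conf: the cost parameter can be overridden for a single session and reset afterwards. A minimal sketch, assuming the slow query from the pastebin link is pasted in place of the placeholder:

    SET random_page_cost = 4;                 -- session-local override, no restart needed
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;    -- the query from https://pastebin.com/9b953tT7
    RESET random_page_cost;                   -- back to the configured value (1 on this server)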
{
"msg_contents": "I believe that with SSD disks random_page_cost should be very cheap, but\nhere you go (I decided to settle on EXPLAIN without ANALYZE this time, is\nthis is good enough?):\n\n Sort (cost=18410.26..18410.27 rows=1 width=63)\n Sort Key: (sum(st.shows)) DESC\n CTE a\n -> Index Scan using adroom_active_idx on adroom (cost=0.28..301.85\nrows=1 width=233)\n Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND\n(CURRENT_TIMESTAMP <= stop_ts))\n Filter: (((groups -> 0) ->> 'provider'::text) ~\n'^target_mail_ru'::text)\n CTE b\n -> HashAggregate (cost=1.28..1.29 rows=1 width=40)\n Group Key: a.provider, a.id, unnest(a.domain_ids)\n -> ProjectSet (cost=0.00..0.53 rows=100 width=40)\n -> CTE Scan on a (cost=0.00..0.02 rows=1 width=68)\n -> GroupAggregate (cost=18107.09..18107.11 rows=1 width=63)\n Group Key: b.provider, d.domain\n -> Sort (cost=18107.09..18107.09 rows=1 width=55)\n Sort Key: b.provider, d.domain\n -> Nested Loop (cost=1.00..18107.08 rows=1 width=55)\n Join Filter: ((b.id = st.adroom_id) AND (b.domain_id =\nst.domain_id))\n -> Nested Loop (cost=0.42..8.46 rows=1 width=59)\n -> CTE Scan on b (cost=0.00..0.02 rows=1\nwidth=40)\n -> Index Scan using domains_pkey on domains d\n(cost=0.42..8.44 rows=1 width=19)\n Index Cond: (id = b.domain_id)\n -> Index Scan using\nadroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat st\n(cost=0.58..180\n91.26 rows=491 width=16)\n Index Cond: ((day >= date_trunc('day'::text,\n(CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <=\ndate_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = d.id))\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-22 17:44 GMT+03:00 Alex Ignatov <[email protected]>:\n\n> Here is my select right after initdb:\n>\n>\n>\n> postgres=# select name,setting from pg_settings where name like '%_cost';\n>\n> name | setting\n>\n> ----------------------+---------\n>\n> cpu_index_tuple_cost | 0.005\n>\n> cpu_operator_cost | 0.0025\n>\n> cpu_tuple_cost | 0.01\n>\n> parallel_setup_cost | 1000\n>\n> parallel_tuple_cost | 0.1\n>\n> random_page_cost | 4\n>\n> seq_page_cost | 1\n>\n>\n>\n>\n>\n> Can you generate plan with random_page_cost = 4?\n>\n>\n>\n>\n>\n> --\n>\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n> *From:* Dmitry Shalashov [mailto:[email protected]]\n> *Sent:* Wednesday, November 22, 2017 5:29 PM\n> *To:* Alex Ignatov <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: Query became very slow after 9.6 -> 10 upgrade\n>\n>\n>\n> Sure, here it goes:\n>\n>\n>\n> name | setting\n>\n> ----------------------+---------\n>\n> cpu_index_tuple_cost | 0.005\n>\n> cpu_operator_cost | 0.0025\n>\n> cpu_tuple_cost | 0.01\n>\n> parallel_setup_cost | 1000\n>\n> parallel_tuple_cost | 0.1\n>\n> random_page_cost | 1\n>\n> seq_page_cost | 1\n>\n>\n>\n>\n> Dmitry Shalashov, relap.io & surfingbird.ru\n>\n>\n>\n> 2017-11-22 17:24 GMT+03:00 Alex Ignatov <[email protected]>:\n>\n> Hello!\n>\n> What about :\n>\n> select name,setting from pg_settings where name like '%_cost';\n>\n>\n>\n> --\n>\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n>\n>\n> *From:* Dmitry Shalashov [mailto:[email protected]]\n> *Sent:* Wednesday, November 22, 2017 5:14 PM\n> *To:* [email protected]\n> *Subject:* Query became very slow after 9.6 -> 10 upgrade\n>\n>\n>\n> Hi!\n>\n>\n>\n> I've seen few letters like this on mailing list and for some reason\n> thought that probably it won't happen to us, 
but here I am lol.\n>\n>\n>\n> It's \"nestloop hits again\" situation.\n>\n>\n>\n> I'll try to provide plan from 9.6 later, but right now I have only plan\n> from 10.1.\n>\n>\n>\n> Query: https://pastebin.com/9b953tT7\n>\n> It was running under 3 seconds (it's our default timeout) and now it runs\n> for 12 minutes.\n>\n>\n>\n> \\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\n>\n> \\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy\n> condition on day column)\n>\n> \\d domains: https://pastebin.com/65hk7YCm (73000 rows)\n>\n>\n>\n> All three tables are analyzed.\n>\n>\n>\n> EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0\n>\n> EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s)\n>\n>\n>\n> Regarding server parameters - it's a mighty beast with 2x E5-2630 v3,\n> 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1.\n>\n>\n>\n> What can I do with it?\n>\n>\n>\n>\n>\n> Also maybe this will be useful:\n>\n>\n>\n> 1st query, runs under 1ms\n>\n> select title, id, groups->0->>'provider' provider, domain_ids from adroom\n> where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\n> current_timestamp between start_ts and stop_ts\n>\n>\n>\n> 2nd query that uses 1st one, runs under 3 ms\n>\n> select distinct unnest(domain_ids) FROM (select title, id,\n> groups->0->>'provider' provider, domain_ids from adroom where\n> groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and\n> current_timestamp between start_ts and stop_ts) t1\n>\n>\n>\n> 3rd query which returns 1.5mln rows, runs in about 0.6s\n>\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp)\n>\n>\n>\n> BUT if I'll add to 3rd query one additional condition, which is basically\n> 2nd query, it will ran same 12 minutes:\n>\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp) AND domain_id IN (select distinct\n> unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider,\n> domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and\n> not is_paused and current_timestamp between start_ts and stop_ts) t1)\n>\n>\n>\n> Plan of last query:\n>\n> Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\n> time=3.512..733248.271 rows=1442797 loops=1)\n>\n> -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\n> time=3.380..13.561 rows=3043 loops=1)\n>\n> Group Key: (unnest(adroom.domain_ids))\n>\n> -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\n> time=2.199..2.607 rows=3043 loops=1)\n>\n> Group Key: unnest(adroom.domain_ids)\n>\n> -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual\n> time=0.701..1.339 rows=3173 loops=1)\n>\n> -> Index Scan using adroom_active_idx on adroom\n> (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4\n> loops=1)\n>\n> Index Cond: ((CURRENT_TIMESTAMP >= start_ts)\n> AND (CURRENT_TIMESTAMP <= stop_ts))\n>\n> Filter: (((groups -> 0) ->> 'provider'::text) ~\n> '^target_mail_ru'::text)\n>\n> Rows Removed by Filter: 41\n>\n> -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx\n> on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual\n> time=104.847..240.846 rows=474 loops=3043)\n>\n> Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP -\n> '7 days'::interval))) AND 
(day <= date_trunc('day'::text,\n> CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids))))\n>\n> Planning time: 1.580 ms\n>\n> Execution time: 733331.740 ms\n>\n>\n>\n> Dmitry Shalashov, relap.io & surfingbird.ru\n>\n>\n>\n\nI believe that with SSD disks random_page_cost should be very cheap, but here you go (I decided to settle on EXPLAIN without ANALYZE this time, is this is good enough?): Sort (cost=18410.26..18410.27 rows=1 width=63) Sort Key: (sum(st.shows)) DESC CTE a -> Index Scan using adroom_active_idx on adroom (cost=0.28..301.85 rows=1 width=233) Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND (CURRENT_TIMESTAMP <= stop_ts)) Filter: (((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text) CTE b -> HashAggregate (cost=1.28..1.29 rows=1 width=40) Group Key: a.provider, a.id, unnest(a.domain_ids) -> ProjectSet (cost=0.00..0.53 rows=100 width=40) -> CTE Scan on a (cost=0.00..0.02 rows=1 width=68) -> GroupAggregate (cost=18107.09..18107.11 rows=1 width=63) Group Key: b.provider, d.domain -> Sort (cost=18107.09..18107.09 rows=1 width=55) Sort Key: b.provider, d.domain -> Nested Loop (cost=1.00..18107.08 rows=1 width=55) Join Filter: ((b.id = st.adroom_id) AND (b.domain_id = st.domain_id)) -> Nested Loop (cost=0.42..8.46 rows=1 width=59) -> CTE Scan on b (cost=0.00..0.02 rows=1 width=40) -> Index Scan using domains_pkey on domains d (cost=0.42..8.44 rows=1 width=19) Index Cond: (id = b.domain_id) -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat st (cost=0.58..18091.26 rows=491 width=16) Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <= date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = d.id))Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-22 17:44 GMT+03:00 Alex Ignatov <[email protected]>:Here is my select right after initdb: postgres=# select name,setting from pg_settings where name like '%_cost'; name | setting----------------------+--------- cpu_index_tuple_cost | 0.005 cpu_operator_cost | 0.0025 cpu_tuple_cost | 0.01 parallel_setup_cost | 1000 parallel_tuple_cost | 0.1 random_page_cost | 4 seq_page_cost | 1 Can you generate plan with random_page_cost = 4? --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Dmitry Shalashov [mailto:[email protected]] Sent: Wednesday, November 22, 2017 5:29 PMTo: Alex Ignatov <[email protected]>Cc: [email protected]: Re: Query became very slow after 9.6 -> 10 upgrade Sure, here it goes: name | setting----------------------+--------- cpu_index_tuple_cost | 0.005 cpu_operator_cost | 0.0025 cpu_tuple_cost | 0.01 parallel_setup_cost | 1000 parallel_tuple_cost | 0.1 random_page_cost | 1 seq_page_cost | 1 Dmitry Shalashov, relap.io & surfingbird.ru 2017-11-22 17:24 GMT+03:00 Alex Ignatov <[email protected]>:Hello!What about :select name,setting from pg_settings where name like '%_cost'; --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Dmitry Shalashov [mailto:[email protected]] Sent: Wednesday, November 22, 2017 5:14 PMTo: [email protected]: Query became very slow after 9.6 -> 10 upgrade Hi! I've seen few letters like this on mailing list and for some reason thought that probably it won't happen to us, but here I am lol. It's \"nestloop hits again\" situation. I'll try to provide plan from 9.6 later, but right now I have only plan from 10.1. 
Query: https://pastebin.com/9b953tT7It was running under 3 seconds (it's our default timeout) and now it runs for 12 minutes. \\d adroom: https://pastebin.com/vBrPGtxT (3800 rows)\\d adroom_stat: https://pastebin.com/CkBArCC9 (47mln rows, 1.5mln satisfy condition on day column)\\d domains: https://pastebin.com/65hk7YCm (73000 rows) All three tables are analyzed. EXPLAIN ANALYZE: https://pastebin.com/PenHEgf0EXPLAIN ANALYZE with nestloop off: https://pastebin.com/zX35CPCV (0.8s) Regarding server parameters - it's a mighty beast with 2x E5-2630 v3, 192Gb of RAM and two very, very fast NVME server class SSD's in RAID1. What can I do with it? Also maybe this will be useful: 1st query, runs under 1msselect title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts 2nd query that uses 1st one, runs under 3 msselect distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1 3rd query which returns 1.5mln rows, runs in about 0.6sSELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) BUT if I'll add to 3rd query one additional condition, which is basically 2nd query, it will ran same 12 minutes:SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day between date_trunc('day', current_timestamp - interval '1 week') and date_trunc('day', current_timestamp) AND domain_id IN (select distinct unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider, domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and not is_paused and current_timestamp between start_ts and stop_ts) t1) Plan of last query: Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual time=3.512..733248.271 rows=1442797 loops=1) -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual time=3.380..13.561 rows=3043 loops=1) Group Key: (unnest(adroom.domain_ids)) -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual time=2.199..2.607 rows=3043 loops=1) Group Key: unnest(adroom.domain_ids) -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual time=0.701..1.339 rows=3173 loops=1) -> Index Scan using adroom_active_idx on adroom (cost=0.28..87.27 rows=1 width=167) (actual time=0.688..1.040 rows=4 loops=1) Index Cond: ((CURRENT_TIMESTAMP >= start_ts) AND (CURRENT_TIMESTAMP <= stop_ts)) Filter: (((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text) Rows Removed by Filter: 41 -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat (cost=0.58..25524.33 rows=491 width=16) (actual time=104.847..240.846 rows=474 loops=3043) Index Cond: ((day >= date_trunc('day'::text, (CURRENT_TIMESTAMP - '7 days'::interval))) AND (day <= date_trunc('day'::text, CURRENT_TIMESTAMP)) AND (domain_id = (unnest(adroom.domain_ids)))) Planning time: 1.580 ms Execution time: 733331.740 ms Dmitry Shalashov, relap.io & surfingbird.ru",
"msg_date": "Wed, 22 Nov 2017 17:51:22 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
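A note on the EXPLAIN-without-ANALYZE plan above: it shows only the planner's estimates, so the row-count misestimates that cause the bad nested loop stay invisible. A hedged sketch of the difference, plus the SSD-oriented compromise value that is often suggested (the 1.1 figure is an assumption, not something taken from this thread):

    EXPLAIN SELECT ...;                        -- estimates only, the query is not executed
    EXPLAIN (ANALYZE, TIMING OFF) SELECT ...;  -- runs the query, shows estimated vs actual rows
    SET random_page_cost = 1.1;                -- common SSD compromise between 1 and the default 4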
{
"msg_contents": "IMHO the problems here are due to poor cardinality estimates.\n\nFor example in the first query, the problem is here:\n\n -> Nested Loop (cost=0.42..2.46 rows=1 width=59)\n (actual time=2.431..91.330 rows=3173 loops=1)\n -> CTE Scan on b (cost=0.00..0.02 rows=1 width=40)\n (actual time=2.407..23.115 rows=3173 loops=1)\n -> Index Scan using domains_pkey on domains d\n (cost=0.42..2.44 rows=1 width=19)\n (actual time=0.018..0.018 rows=1 loops=3173)\n\nThat is, the database expects the CTE to return 1 row, but it returns\n3173 of them, which makes the nested loop very inefficient.\n\nSimilarly for the other query, where this happens:\n\n Nested Loop (cost=88.63..25617.31 rows=491 width=16)\n (actual time=3.512..733248.271 rows=1442797 loops=1)\n -> HashAggregate (cost=88.06..88.07 rows=1 width=4)\n (actual time=3.380..13.561 rows=3043 loops=1)\n\nThat is, about 1:3000 difference in both cases.\n\nThose estimation errors seem to be caused by a condition that is almost\nimpossible to estimate, because in both queries it does this:\n\n groups->0->>'provider' ~ '^something'\n\nThat is, it's a regexp on an expression. You might try creating an index\non the expression (which is the only way to add expression statistics),\nand reformulate the condition as LIKE (which I believe we can estimate\nbetter than regular expressions, but I haven't tried).\n\nSo something like\n\n CREATE INDEX ON adroom ((groups->0->>'provider'));\n\n WHERE groups->0->>'provider' LIKE 'something%';\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 22 Nov 2017 16:07:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
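Putting Tomas's two suggestions together for this particular schema might look like the sketch below; the index name is arbitrary, and the LIKE rewrite assumes the regexp really is a plain prefix match, as '^target_mail_ru' suggests:

    CREATE INDEX adroom_provider_expr_idx ON adroom ((groups->0->>'provider'));
    ANALYZE adroom;   -- gathers statistics for the indexed expression

    -- and in the query, replace
    --     groups->0->>'provider' ~ '^target_mail_ru'
    -- with
    --     groups->0->>'provider' LIKE 'target_mail_ru%'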
{
"msg_contents": "Dmitry Shalashov <[email protected]> writes:\n> BUT if I'll add to 3rd query one additional condition, which is basically\n> 2nd query, it will ran same 12 minutes:\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp) AND domain_id IN (select distinct\n> unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider,\n> domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and\n> not is_paused and current_timestamp between start_ts and stop_ts) t1)\n\n> Plan of last query:\n> Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\n> time=3.512..733248.271 rows=1442797 loops=1)\n> -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\n> time=3.380..13.561 rows=3043 loops=1)\n> Group Key: (unnest(adroom.domain_ids))\n> -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\n> time=2.199..2.607 rows=3043 loops=1)\n> Group Key: unnest(adroom.domain_ids)\n> -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual\n> time=0.701..1.339 rows=3173 loops=1)\n\nHm, seems like the problem is that that lower HashAggregate is estimated\nas having only one row out, which is way off and doesn't sound like a\nparticularly bright default estimate anyway. (And then we're doing an\nadditional HashAggregate on top of that, which is useless --- implies\nthat something isn't realizing that the output of the SELECT DISTINCT\nis already distinct.)\n\nI'm suspicious that this is breakage from the work that was done on\ntargetlist SRFs in v10, but that's just a guess at this point.\n\nTrying simple test queries involving WHERE x IN (SELECT DISTINCT\nunnest(foo) FROM ...), I do not see a behavior like this, so there is some\nnot-very-obvious contributing factor in your situation. Can you put\ntogether a self-contained test case that produces a bogus one-row\nestimate? Extra points if it produces duplicate HashAgg steps.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 22 Nov 2017 10:19:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
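The kind of self-contained reproduction Tom is asking for needs nothing more than a scratch table with an array column; this is only a sketch of the shape such a test case could take (the table, column, and target-table names are made up):

    CREATE TABLE arrays_demo (ids int[], flag int);
    INSERT INTO arrays_demo
    SELECT array_agg(g), 1 FROM generate_series(1, 3000) AS g;
    ANALYZE arrays_demo;

    EXPLAIN
    SELECT * FROM some_big_table
    WHERE id IN (SELECT DISTINCT unnest(ids) FROM arrays_demo WHERE flag = 1);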
{
"msg_contents": "Turns out we had not 9.6 but 9.5.\n\nAnd query plan from 9.5 is:\n\n Sort (cost=319008.18..319008.19 rows=1 width=556) (actual\ntime=0.028..0.028 rows=0 loops=1)\n Sort Key: (sum(st.shows)) DESC\n Sort Method: quicksort Memory: 25kB\n CTE a\n -> Index Scan using adroom_active_idx on adroom (cost=0.13..5.21\nrows=1 width=584) (actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: ((now() >= start_ts) AND (now() <= stop_ts))\n Filter: (((groups -> 0) ->> 'provider'::text) ~\n'^target_mail_ru'::text)\n CTE b\n -> HashAggregate (cost=1.27..1.77 rows=100 width=68) (actual\ntime=0.005..0.005 rows=0 loops=1)\n Group Key: a.provider, a.id, unnest(a.domain_ids)\n -> CTE Scan on a (cost=0.00..0.52 rows=100 width=68) (actual\ntime=0.004..0.004 rows=0 loops=1)\n -> HashAggregate (cost=319001.17..319001.18 rows=1 width=556) (actual\ntime=0.013..0.013 rows=0 loops=1)\n Group Key: b.provider, d.domain\n -> Hash Join (cost=16.55..319001.16 rows=1 width=556) (actual\ntime=0.013..0.013 rows=0 loops=1)\n Hash Cond: ((st.adroom_id = b.id) AND (st.domain_id =\nb.domain_id))\n -> Hash Join (cost=13.05..318633.29 rows=48581 width=536)\n(never executed)\n Hash Cond: (st.domain_id = d.id)\n -> Index Scan using\nadroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat st\n(cost=0.58..313307.30 rows=1287388 width=16) (never executed)\n Index Cond: ((day >= date_trunc('day'::text,\n(now() - '7 days'::interval))) AND (day <= date_trunc('day'::text, now())))\n -> Hash (cost=11.10..11.10 rows=110 width=520)\n(never executed)\n -> Seq Scan on domains d (cost=0.00..11.10\nrows=110 width=520) (never executed)\n -> Hash (cost=2.00..2.00 rows=100 width=40) (actual\ntime=0.007..0.007 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> CTE Scan on b (cost=0.00..2.00 rows=100 width=40)\n(actual time=0.007..0.007 rows=0 loops=1)\n Planning time: 6.641 ms\n Execution time: 0.203 ms\n\n\nAlso I prepared test case for Tom and sent it to him.\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-22 18:19 GMT+03:00 Tom Lane <[email protected]>:\n\n> Dmitry Shalashov <[email protected]> writes:\n> > BUT if I'll add to 3rd query one additional condition, which is basically\n> > 2nd query, it will ran same 12 minutes:\n> > SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> > between date_trunc('day', current_timestamp - interval '1 week') and\n> > date_trunc('day', current_timestamp) AND domain_id IN (select distinct\n> > unnest(domain_ids) FROM (select title, id, groups->0->>'provider'\n> provider,\n> > domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru'\n> and\n> > not is_paused and current_timestamp between start_ts and stop_ts) t1)\n>\n> > Plan of last query:\n> > Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\n> > time=3.512..733248.271 rows=1442797 loops=1)\n> > -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\n> > time=3.380..13.561 rows=3043 loops=1)\n> > Group Key: (unnest(adroom.domain_ids))\n> > -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\n> > time=2.199..2.607 rows=3043 loops=1)\n> > Group Key: unnest(adroom.domain_ids)\n> > -> ProjectSet (cost=0.28..87.78 rows=100 width=4)\n> (actual\n> > time=0.701..1.339 rows=3173 loops=1)\n>\n> Hm, seems like the problem is that that lower HashAggregate is estimated\n> as having only one row out, which is way off and doesn't sound like a\n> particularly bright default estimate anyway. 
(And then we're doing an\n> additional HashAggregate on top of that, which is useless --- implies\n> that something isn't realizing that the output of the SELECT DISTINCT\n> is already distinct.)\n>\n> I'm suspicious that this is breakage from the work that was done on\n> targetlist SRFs in v10, but that's just a guess at this point.\n>\n> Trying simple test queries involving WHERE x IN (SELECT DISTINCT\n> unnest(foo) FROM ...), I do not see a behavior like this, so there is some\n> not-very-obvious contributing factor in your situation. Can you put\n> together a self-contained test case that produces a bogus one-row\n> estimate? Extra points if it produces duplicate HashAgg steps.\n>\n> regards, tom lane\n>\n\nTurns out we had not 9.6 but 9.5.And query plan from 9.5 is: Sort (cost=319008.18..319008.19 rows=1 width=556) (actual time=0.028..0.028 rows=0 loops=1) Sort Key: (sum(st.shows)) DESC Sort Method: quicksort Memory: 25kB CTE a -> Index Scan using adroom_active_idx on adroom (cost=0.13..5.21 rows=1 width=584) (actual time=0.004..0.004 rows=0 loops=1) Index Cond: ((now() >= start_ts) AND (now() <= stop_ts)) Filter: (((groups -> 0) ->> 'provider'::text) ~ '^target_mail_ru'::text) CTE b -> HashAggregate (cost=1.27..1.77 rows=100 width=68) (actual time=0.005..0.005 rows=0 loops=1) Group Key: a.provider, a.id, unnest(a.domain_ids) -> CTE Scan on a (cost=0.00..0.52 rows=100 width=68) (actual time=0.004..0.004 rows=0 loops=1) -> HashAggregate (cost=319001.17..319001.18 rows=1 width=556) (actual time=0.013..0.013 rows=0 loops=1) Group Key: b.provider, d.domain -> Hash Join (cost=16.55..319001.16 rows=1 width=556) (actual time=0.013..0.013 rows=0 loops=1) Hash Cond: ((st.adroom_id = b.id) AND (st.domain_id = b.domain_id)) -> Hash Join (cost=13.05..318633.29 rows=48581 width=536) (never executed) Hash Cond: (st.domain_id = d.id) -> Index Scan using adroom_stat_day_adroom_id_domain_id_url_id_is_wlabp_idx on adroom_stat st (cost=0.58..313307.30 rows=1287388 width=16) (never executed) Index Cond: ((day >= date_trunc('day'::text, (now() - '7 days'::interval))) AND (day <= date_trunc('day'::text, now()))) -> Hash (cost=11.10..11.10 rows=110 width=520) (never executed) -> Seq Scan on domains d (cost=0.00..11.10 rows=110 width=520) (never executed) -> Hash (cost=2.00..2.00 rows=100 width=40) (actual time=0.007..0.007 rows=0 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 8kB -> CTE Scan on b (cost=0.00..2.00 rows=100 width=40) (actual time=0.007..0.007 rows=0 loops=1) Planning time: 6.641 ms Execution time: 0.203 msAlso I prepared test case for Tom and sent it to him.Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-22 18:19 GMT+03:00 Tom Lane <[email protected]>:Dmitry Shalashov <[email protected]> writes:\n> BUT if I'll add to 3rd query one additional condition, which is basically\n> 2nd query, it will ran same 12 minutes:\n> SELECT adroom_id, domain_id, shows, clicks FROM adroom_stat WHERE day\n> between date_trunc('day', current_timestamp - interval '1 week') and\n> date_trunc('day', current_timestamp) AND domain_id IN (select distinct\n> unnest(domain_ids) FROM (select title, id, groups->0->>'provider' provider,\n> domain_ids from adroom where groups->0->>'provider' ~ '^target_mail_ru' and\n> not is_paused and current_timestamp between start_ts and stop_ts) t1)\n\n> Plan of last query:\n> Nested Loop (cost=88.63..25617.31 rows=491 width=16) (actual\n> time=3.512..733248.271 rows=1442797 loops=1)\n> -> HashAggregate (cost=88.06..88.07 rows=1 width=4) (actual\n> time=3.380..13.561 rows=3043 
loops=1)\n> Group Key: (unnest(adroom.domain_ids))\n> -> HashAggregate (cost=88.03..88.04 rows=1 width=4) (actual\n> time=2.199..2.607 rows=3043 loops=1)\n> Group Key: unnest(adroom.domain_ids)\n> -> ProjectSet (cost=0.28..87.78 rows=100 width=4) (actual\n> time=0.701..1.339 rows=3173 loops=1)\n\nHm, seems like the problem is that that lower HashAggregate is estimated\nas having only one row out, which is way off and doesn't sound like a\nparticularly bright default estimate anyway. (And then we're doing an\nadditional HashAggregate on top of that, which is useless --- implies\nthat something isn't realizing that the output of the SELECT DISTINCT\nis already distinct.)\n\nI'm suspicious that this is breakage from the work that was done on\ntargetlist SRFs in v10, but that's just a guess at this point.\n\nTrying simple test queries involving WHERE x IN (SELECT DISTINCT\nunnest(foo) FROM ...), I do not see a behavior like this, so there is some\nnot-very-obvious contributing factor in your situation. Can you put\ntogether a self-contained test case that produces a bogus one-row\nestimate? Extra points if it produces duplicate HashAgg steps.\n\n regards, tom lane",
"msg_date": "Thu, 23 Nov 2017 00:34:47 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
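Until the estimate itself is fixed, the enable_nestloop experiment mentioned earlier in the thread (the 0.8s plan) can also serve as a narrowly scoped stopgap; this is a diagnostic workaround, not a recommended permanent setting:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- affects only this transaction
    -- run the slow report query here
    COMMIT;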
{
"msg_contents": "Dmitry Shalashov <[email protected]> writes:\n> Turns out we had not 9.6 but 9.5.\n\nI'd managed to reproduce the weird planner behavior locally in the\nregression database:\n\nregression=# create table foo (f1 int[], f2 int);\nCREATE TABLE\nregression=# explain select * from tenk1 where unique2 in (select distinct unnest(f1) from foo where f2=1);\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Nested Loop (cost=30.85..80.50 rows=6 width=244)\n -> HashAggregate (cost=30.57..30.63 rows=6 width=4)\n Group Key: (unnest(foo.f1))\n -> HashAggregate (cost=30.42..30.49 rows=6 width=4)\n Group Key: unnest(foo.f1)\n -> ProjectSet (cost=0.00..28.92 rows=600 width=4)\n -> Seq Scan on foo (cost=0.00..25.88 rows=6 width=32)\n Filter: (f2 = 1)\n -> Index Scan using tenk1_unique2 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique2 = (unnest(foo.f1)))\n(10 rows)\n\nDigging into it, the reason for the duplicate HashAggregate step was that\nquery_supports_distinctness() punted on SRFs-in-the-targetlist, basically\non the argument that it wasn't worth extra work to handle that case.\nThinking a bit harder, it seems to me that the correct analysis is:\n1. If we are proving distinctness on the grounds of a DISTINCT clause,\nthen it doesn't matter whether there are any SRFs, because DISTINCT\nremoves duplicates after tlist SRF expansion.\n2. But tlist SRFs break the ability to prove distinctness on the grounds\nof GROUP BY, unless all of them are within grouping columns.\nIt still seems like detecting the second case is harder than it's worth,\nbut we can trivially handle the first case, with little more than some\ncode rearrangement.\n\nThe other problem is that the output rowcount of the sub-select (ie, of\nthe HashAggregate) is being estimated as though the SRF weren't there.\nThis turns out to be because estimate_num_groups() doesn't consider the\npossibility of SRFs in the grouping columns. It never has, but in 9.6 and\nbefore the problem was masked by the fact that grouping_planner scaled up\nthe result rowcount by tlist_returns_set_rows() *after* performing\ngrouping. Now we're effectively doing that in the other order, which is\nmore correct, but that means estimate_num_groups() has to apply some sort\nof adjustment. I suggest that it just multiply its old estimate by the\nmaximum of the SRF expansion counts. That's likely to be an overestimate,\nbut it's really hard to do better without specific knowledge of the\nindividual SRF's behavior.\n\nIn short, I propose the attached fixes. I've checked this and it seems\nto fix Dmitry's original problem according to the test case he sent\noff-list.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 22 Nov 2017 18:07:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
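For installations that cannot apply the patch right away, one way to sidestep the misestimate Tom describes is to move the set-returning function out of the target list and into FROM, so there is no tlist SRF for the planner to trip over. A hedged rewrite sketch of the subquery from the original report (whether it actually improves the plan should of course be checked with EXPLAIN):

    ... AND domain_id IN (
        SELECT d.domain_id
        FROM adroom a,
             LATERAL unnest(a.domain_ids) AS d(domain_id)
        WHERE a.groups->0->>'provider' ~ '^target_mail_ru'
          AND NOT a.is_paused
          AND current_timestamp BETWEEN a.start_ts AND a.stop_ts)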
{
"msg_contents": "We tried to apply the patch on 10.1 source, but something is wrong it seems:\n\npatch -p1 < ../1.patch\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/backend/optimizer/plan/analyzejoins.c\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/backend/utils/adt/selfuncs.c\nHunk #1 succeeded at 3270 (offset -91 lines).\nHunk #2 succeeded at 3304 (offset -91 lines).\nHunk #3 succeeded at 3313 (offset -91 lines).\nHunk #4 succeeded at 3393 (offset -91 lines).\npatch unexpectedly ends in middle of line\nHunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-23 2:07 GMT+03:00 Tom Lane <[email protected]>:\n\n> Dmitry Shalashov <[email protected]> writes:\n> > Turns out we had not 9.6 but 9.5.\n>\n> I'd managed to reproduce the weird planner behavior locally in the\n> regression database:\n>\n> regression=# create table foo (f1 int[], f2 int);\n> CREATE TABLE\n> regression=# explain select * from tenk1 where unique2 in (select distinct\n> unnest(f1) from foo where f2=1);\n> QUERY PLAN\n> ------------------------------------------------------------\n> -----------------------\n> Nested Loop (cost=30.85..80.50 rows=6 width=244)\n> -> HashAggregate (cost=30.57..30.63 rows=6 width=4)\n> Group Key: (unnest(foo.f1))\n> -> HashAggregate (cost=30.42..30.49 rows=6 width=4)\n> Group Key: unnest(foo.f1)\n> -> ProjectSet (cost=0.00..28.92 rows=600 width=4)\n> -> Seq Scan on foo (cost=0.00..25.88 rows=6\n> width=32)\n> Filter: (f2 = 1)\n> -> Index Scan using tenk1_unique2 on tenk1 (cost=0.29..8.30 rows=1\n> width=244)\n> Index Cond: (unique2 = (unnest(foo.f1)))\n> (10 rows)\n>\n> Digging into it, the reason for the duplicate HashAggregate step was that\n> query_supports_distinctness() punted on SRFs-in-the-targetlist, basically\n> on the argument that it wasn't worth extra work to handle that case.\n> Thinking a bit harder, it seems to me that the correct analysis is:\n> 1. If we are proving distinctness on the grounds of a DISTINCT clause,\n> then it doesn't matter whether there are any SRFs, because DISTINCT\n> removes duplicates after tlist SRF expansion.\n> 2. But tlist SRFs break the ability to prove distinctness on the grounds\n> of GROUP BY, unless all of them are within grouping columns.\n> It still seems like detecting the second case is harder than it's worth,\n> but we can trivially handle the first case, with little more than some\n> code rearrangement.\n>\n> The other problem is that the output rowcount of the sub-select (ie, of\n> the HashAggregate) is being estimated as though the SRF weren't there.\n> This turns out to be because estimate_num_groups() doesn't consider the\n> possibility of SRFs in the grouping columns. It never has, but in 9.6 and\n> before the problem was masked by the fact that grouping_planner scaled up\n> the result rowcount by tlist_returns_set_rows() *after* performing\n> grouping. Now we're effectively doing that in the other order, which is\n> more correct, but that means estimate_num_groups() has to apply some sort\n> of adjustment. I suggest that it just multiply its old estimate by the\n> maximum of the SRF expansion counts. That's likely to be an overestimate,\n> but it's really hard to do better without specific knowledge of the\n> individual SRF's behavior.\n>\n> In short, I propose the attached fixes. 
I've checked this and it seems\n> to fix Dmitry's original problem according to the test case he sent\n> off-list.\n>\n> regards, tom lane\n>\n>\n\nWe tried to apply the patch on 10.1 source, but something is wrong it seems:patch -p1 < ../1.patch(Stripping trailing CRs from patch; use --binary to disable.)patching file src/backend/optimizer/plan/analyzejoins.c(Stripping trailing CRs from patch; use --binary to disable.)patching file src/backend/utils/adt/selfuncs.cHunk #1 succeeded at 3270 (offset -91 lines).Hunk #2 succeeded at 3304 (offset -91 lines).Hunk #3 succeeded at 3313 (offset -91 lines).Hunk #4 succeeded at 3393 (offset -91 lines).patch unexpectedly ends in middle of lineHunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-23 2:07 GMT+03:00 Tom Lane <[email protected]>:Dmitry Shalashov <[email protected]> writes:\n> Turns out we had not 9.6 but 9.5.\n\nI'd managed to reproduce the weird planner behavior locally in the\nregression database:\n\nregression=# create table foo (f1 int[], f2 int);\nCREATE TABLE\nregression=# explain select * from tenk1 where unique2 in (select distinct unnest(f1) from foo where f2=1);\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Nested Loop (cost=30.85..80.50 rows=6 width=244)\n -> HashAggregate (cost=30.57..30.63 rows=6 width=4)\n Group Key: (unnest(foo.f1))\n -> HashAggregate (cost=30.42..30.49 rows=6 width=4)\n Group Key: unnest(foo.f1)\n -> ProjectSet (cost=0.00..28.92 rows=600 width=4)\n -> Seq Scan on foo (cost=0.00..25.88 rows=6 width=32)\n Filter: (f2 = 1)\n -> Index Scan using tenk1_unique2 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique2 = (unnest(foo.f1)))\n(10 rows)\n\nDigging into it, the reason for the duplicate HashAggregate step was that\nquery_supports_distinctness() punted on SRFs-in-the-targetlist, basically\non the argument that it wasn't worth extra work to handle that case.\nThinking a bit harder, it seems to me that the correct analysis is:\n1. If we are proving distinctness on the grounds of a DISTINCT clause,\nthen it doesn't matter whether there are any SRFs, because DISTINCT\nremoves duplicates after tlist SRF expansion.\n2. But tlist SRFs break the ability to prove distinctness on the grounds\nof GROUP BY, unless all of them are within grouping columns.\nIt still seems like detecting the second case is harder than it's worth,\nbut we can trivially handle the first case, with little more than some\ncode rearrangement.\n\nThe other problem is that the output rowcount of the sub-select (ie, of\nthe HashAggregate) is being estimated as though the SRF weren't there.\nThis turns out to be because estimate_num_groups() doesn't consider the\npossibility of SRFs in the grouping columns. It never has, but in 9.6 and\nbefore the problem was masked by the fact that grouping_planner scaled up\nthe result rowcount by tlist_returns_set_rows() *after* performing\ngrouping. Now we're effectively doing that in the other order, which is\nmore correct, but that means estimate_num_groups() has to apply some sort\nof adjustment. I suggest that it just multiply its old estimate by the\nmaximum of the SRF expansion counts. That's likely to be an overestimate,\nbut it's really hard to do better without specific knowledge of the\nindividual SRF's behavior.\n\nIn short, I propose the attached fixes. 
I've checked this and it seems\nto fix Dmitry's original problem according to the test case he sent\noff-list.\n\n regards, tom lane",
"msg_date": "Thu, 23 Nov 2017 17:58:23 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Dmitry Shalashov <[email protected]> writes:\n> We tried to apply the patch on 10.1 source, but something is wrong it seems:\n> patch -p1 < ../1.patch\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/backend/optimizer/plan/analyzejoins.c\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/backend/utils/adt/selfuncs.c\n> Hunk #1 succeeded at 3270 (offset -91 lines).\n> Hunk #2 succeeded at 3304 (offset -91 lines).\n> Hunk #3 succeeded at 3313 (offset -91 lines).\n> Hunk #4 succeeded at 3393 (offset -91 lines).\n> patch unexpectedly ends in middle of line\n> Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).\n\nThe line number offsets are expected when applying to v10, but it looks\nlike you failed to transfer the attachment cleanly ... there were\ncertainly not CRs in it when I mailed it. The output on v10\nshould just look like\n\npatching file src/backend/optimizer/plan/analyzejoins.c\npatching file src/backend/utils/adt/selfuncs.c\nHunk #1 succeeded at 3270 (offset -91 lines).\nHunk #2 succeeded at 3304 (offset -91 lines).\nHunk #3 succeeded at 3313 (offset -91 lines).\nHunk #4 succeeded at 3393 (offset -91 lines).\nHunk #5 succeeded at 3570 (offset -91 lines).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 23 Nov 2017 12:00:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "> The line number offsets are expected when applying to v10, but it looks\n> like you failed to transfer the attachment cleanly ...\n\nYes, it was some mistake on our side.\n\nIt looks that patch helps us. Tom, thank you!\nI'm still testing it though, just in case.\n\nWhat are PostgreSQL schedule on releasing fixes like this? Can I expect\nthat it will be in 10.2 and when can I expect 10.2, approximately of course?\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-23 20:00 GMT+03:00 Tom Lane <[email protected]>:\n\n> Dmitry Shalashov <[email protected]> writes:\n> > We tried to apply the patch on 10.1 source, but something is wrong it\n> seems:\n> > patch -p1 < ../1.patch\n> > (Stripping trailing CRs from patch; use --binary to disable.)\n> > patching file src/backend/optimizer/plan/analyzejoins.c\n> > (Stripping trailing CRs from patch; use --binary to disable.)\n> > patching file src/backend/utils/adt/selfuncs.c\n> > Hunk #1 succeeded at 3270 (offset -91 lines).\n> > Hunk #2 succeeded at 3304 (offset -91 lines).\n> > Hunk #3 succeeded at 3313 (offset -91 lines).\n> > Hunk #4 succeeded at 3393 (offset -91 lines).\n> > patch unexpectedly ends in middle of line\n> > Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).\n>\n> The line number offsets are expected when applying to v10, but it looks\n> like you failed to transfer the attachment cleanly ... there were\n> certainly not CRs in it when I mailed it. The output on v10\n> should just look like\n>\n> patching file src/backend/optimizer/plan/analyzejoins.c\n> patching file src/backend/utils/adt/selfuncs.c\n> Hunk #1 succeeded at 3270 (offset -91 lines).\n> Hunk #2 succeeded at 3304 (offset -91 lines).\n> Hunk #3 succeeded at 3313 (offset -91 lines).\n> Hunk #4 succeeded at 3393 (offset -91 lines).\n> Hunk #5 succeeded at 3570 (offset -91 lines).\n>\n> regards, tom lane\n>\n\n> The line number offsets are expected when applying to v10, but it looks> like you failed to transfer the attachment cleanly ...Yes, it was some mistake on our side.It looks that patch helps us. Tom, thank you!I'm still testing it though, just in case.What are PostgreSQL schedule on releasing fixes like this? Can I expect that it will be in 10.2 and when can I expect 10.2, approximately of course?Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-23 20:00 GMT+03:00 Tom Lane <[email protected]>:Dmitry Shalashov <[email protected]> writes:\n> We tried to apply the patch on 10.1 source, but something is wrong it seems:\n> patch -p1 < ../1.patch\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/backend/optimizer/plan/analyzejoins.c\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file src/backend/utils/adt/selfuncs.c\n> Hunk #1 succeeded at 3270 (offset -91 lines).\n> Hunk #2 succeeded at 3304 (offset -91 lines).\n> Hunk #3 succeeded at 3313 (offset -91 lines).\n> Hunk #4 succeeded at 3393 (offset -91 lines).\n> patch unexpectedly ends in middle of line\n> Hunk #5 succeeded at 3570 with fuzz 1 (offset -91 lines).\n\nThe line number offsets are expected when applying to v10, but it looks\nlike you failed to transfer the attachment cleanly ... there were\ncertainly not CRs in it when I mailed it. 
The output on v10\nshould just look like\n\npatching file src/backend/optimizer/plan/analyzejoins.c\npatching file src/backend/utils/adt/selfuncs.c\nHunk #1 succeeded at 3270 (offset -91 lines).\nHunk #2 succeeded at 3304 (offset -91 lines).\nHunk #3 succeeded at 3313 (offset -91 lines).\nHunk #4 succeeded at 3393 (offset -91 lines).\nHunk #5 succeeded at 3570 (offset -91 lines).\n\n regards, tom lane",
"msg_date": "Fri, 24 Nov 2017 18:44:21 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Dmitry Shalashov <[email protected]> writes:\n> It looks that patch helps us. Tom, thank you!\n> I'm still testing it though, just in case.\n\nExcellent, please follow up if you learn anything new.\n\n> What are PostgreSQL schedule on releasing fixes like this? Can I expect\n> that it will be in 10.2 and when can I expect 10.2, approximately of course?\n\nI haven't pushed it to the git repo yet, but I will shortly, and then\nit will be in the next minor release. That will probably be in\nearly February, per our release policy:\nhttps://www.postgresql.org/developer/roadmap/\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 24 Nov 2017 11:39:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "> Excellent, please follow up if you learn anything new.\n\nSure. But my testing is over and something new might come out only\nincidentally now. Testing hasn't reveal anything interesting.\n\n> That will probably be in\n> early February, per our release policy:\n\nok, thanks. That makes me kinda hope for some security problem :)\n\nIs it completely safe to use manually patched version in production?\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-24 19:39 GMT+03:00 Tom Lane <[email protected]>:\n\n> Dmitry Shalashov <[email protected]> writes:\n> > It looks that patch helps us. Tom, thank you!\n> > I'm still testing it though, just in case.\n>\n> Excellent, please follow up if you learn anything new.\n>\n> > What are PostgreSQL schedule on releasing fixes like this? Can I expect\n> > that it will be in 10.2 and when can I expect 10.2, approximately of\n> course?\n>\n> I haven't pushed it to the git repo yet, but I will shortly, and then\n> it will be in the next minor release. That will probably be in\n> early February, per our release policy:\n> https://www.postgresql.org/developer/roadmap/\n>\n> regards, tom lane\n>\n\n> Excellent, please follow up if you learn anything new.Sure. But my testing is over and something new might come out only incidentally now. Testing hasn't reveal anything interesting.> That will probably be in> early February, per our release policy:ok, thanks. That makes me kinda hope for some security problem :)Is it completely safe to use manually patched version in production?Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-24 19:39 GMT+03:00 Tom Lane <[email protected]>:Dmitry Shalashov <[email protected]> writes:\n> It looks that patch helps us. Tom, thank you!\n> I'm still testing it though, just in case.\n\nExcellent, please follow up if you learn anything new.\n\n> What are PostgreSQL schedule on releasing fixes like this? Can I expect\n> that it will be in 10.2 and when can I expect 10.2, approximately of course?\n\nI haven't pushed it to the git repo yet, but I will shortly, and then\nit will be in the next minor release. That will probably be in\nearly February, per our release policy:\nhttps://www.postgresql.org/developer/roadmap/\n\n regards, tom lane",
"msg_date": "Sat, 25 Nov 2017 14:54:55 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]> wrote:\n> Is it completely safe to use manually patched version in production?\n\nPatching upstream PostgreSQL to fix a critical bug is something that\ncan of course be done. And to reach a state where you think something\nis safe to use in production first be sure to test it thoroughly on a\nstage instance. The author is also working on Postgres for 20 years,\nso this gives some insurance.\n-- \nMichael\n\n",
"msg_date": "Sat, 25 Nov 2017 21:13:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "> The author is also working on Postgres for 20 years,\n> so this gives some insurance.\n\nI know. Tom is a legend. But still I'd like to hear from him to be sure :)\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-25 15:13 GMT+03:00 Michael Paquier <[email protected]>:\n\n> On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]>\n> wrote:\n> > Is it completely safe to use manually patched version in production?\n>\n> Patching upstream PostgreSQL to fix a critical bug is something that\n> can of course be done. And to reach a state where you think something\n> is safe to use in production first be sure to test it thoroughly on a\n> stage instance. The author is also working on Postgres for 20 years,\n> so this gives some insurance.\n> --\n> Michael\n>\n\n> The author is also working on Postgres for 20 years,> so this gives some insurance.I know. Tom is a legend. But still I'd like to hear from him to be sure :)Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-25 15:13 GMT+03:00 Michael Paquier <[email protected]>:On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]> wrote:\n> Is it completely safe to use manually patched version in production?\n\nPatching upstream PostgreSQL to fix a critical bug is something that\ncan of course be done. And to reach a state where you think something\nis safe to use in production first be sure to test it thoroughly on a\nstage instance. The author is also working on Postgres for 20 years,\nso this gives some insurance.\n--\nMichael",
"msg_date": "Sat, 25 Nov 2017 18:39:07 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]> wrote:\n>> Is it completely safe to use manually patched version in production?\n\n> Patching upstream PostgreSQL to fix a critical bug is something that\n> can of course be done. And to reach a state where you think something\n> is safe to use in production first be sure to test it thoroughly on a\n> stage instance. The author is also working on Postgres for 20 years,\n> so this gives some insurance.\n\nIt's not like there's some magic dust that we sprinkle on the code at\nrelease time ;-). If there's a problem with that patch, it's much more\nlikely that you'd discover it through field testing than that we would\nnotice it during development (we missed the original problem after all).\nSo you can do that field testing now, or after 10.2 comes out. The\nformer seems preferable, if you are comfortable with building a patched\ncopy at all. I don't know what your normal source of Postgres executables\nis, but all the common packaging technologies make it pretty easy to\nrebuild a package from source with patch(es) added. Modifying your\nvendor's SRPM (or equivalent concept if you're not on Red Hat) is a\ngood skill to have.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 25 Nov 2017 10:42:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
},
{
"msg_contents": "Ok, understood :-)\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2017-11-25 18:42 GMT+03:00 Tom Lane <[email protected]>:\n\n> Michael Paquier <[email protected]> writes:\n> > On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]>\n> wrote:\n> >> Is it completely safe to use manually patched version in production?\n>\n> > Patching upstream PostgreSQL to fix a critical bug is something that\n> > can of course be done. And to reach a state where you think something\n> > is safe to use in production first be sure to test it thoroughly on a\n> > stage instance. The author is also working on Postgres for 20 years,\n> > so this gives some insurance.\n>\n> It's not like there's some magic dust that we sprinkle on the code at\n> release time ;-). If there's a problem with that patch, it's much more\n> likely that you'd discover it through field testing than that we would\n> notice it during development (we missed the original problem after all).\n> So you can do that field testing now, or after 10.2 comes out. The\n> former seems preferable, if you are comfortable with building a patched\n> copy at all. I don't know what your normal source of Postgres executables\n> is, but all the common packaging technologies make it pretty easy to\n> rebuild a package from source with patch(es) added. Modifying your\n> vendor's SRPM (or equivalent concept if you're not on Red Hat) is a\n> good skill to have.\n>\n> regards, tom lane\n>\n\nOk, understood :-)Dmitry Shalashov, relap.io & surfingbird.ru\n2017-11-25 18:42 GMT+03:00 Tom Lane <[email protected]>:Michael Paquier <[email protected]> writes:\n> On Sat, Nov 25, 2017 at 8:54 PM, Dmitry Shalashov <[email protected]> wrote:\n>> Is it completely safe to use manually patched version in production?\n\n> Patching upstream PostgreSQL to fix a critical bug is something that\n> can of course be done. And to reach a state where you think something\n> is safe to use in production first be sure to test it thoroughly on a\n> stage instance. The author is also working on Postgres for 20 years,\n> so this gives some insurance.\n\nIt's not like there's some magic dust that we sprinkle on the code at\nrelease time ;-). If there's a problem with that patch, it's much more\nlikely that you'd discover it through field testing than that we would\nnotice it during development (we missed the original problem after all).\nSo you can do that field testing now, or after 10.2 comes out. The\nformer seems preferable, if you are comfortable with building a patched\ncopy at all. I don't know what your normal source of Postgres executables\nis, but all the common packaging technologies make it pretty easy to\nrebuild a package from source with patch(es) added. Modifying your\nvendor's SRPM (or equivalent concept if you're not on Red Hat) is a\ngood skill to have.\n\n regards, tom lane",
"msg_date": "Sat, 25 Nov 2017 21:59:40 +0300",
"msg_from": "Dmitry Shalashov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query became very slow after 9.6 -> 10 upgrade"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have table created like this:\n\nCREATE TABLE xyz AS SELECT generate_series(1,10000000,1) AS gs;\n\nNow:\n\ndb=# explain analyze select * from xyz where gs&1=1;\n\n QUERY PLAN\n----------------------------------------------------------------------------\n-----------------------------------\n Seq Scan on xyz (cost=0.00..260815.38 rows=68920 width=4) (actual\ntime=0.044..2959.728 rows=5000000 loops=1)\n Filter: ((gs & 1) = 1)\n Rows Removed by Filter: 5000000\n Planning time: 0.133 ms\n Execution time: 3340.886 ms\n(5 rows)\n\nAnd after adding additional clause to WHERE:\n\ndb=# explain analyze select * from xyz where gs&1=1 and gs&2=2;\n\n QUERY PLAN\n----------------------------------------------------------------------------\n---------------------------------\n Seq Scan on xyz (cost=0.00..329735.50 rows=345 width=4) (actual\ntime=0.045..3010.430 rows=2500000 loops=1)\n Filter: (((gs & 1) = 1) AND ((gs & 2) = 2))\n Rows Removed by Filter: 7500000\n Planning time: 0.106 ms\n Execution time: 3176.355 ms\n(5 rows)\n\nAnd one more clause:\n\nnewrr=# explain analyze select * from xyz where gs&1=1 and gs&2=2 and\ngs&4=4;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------\n Seq Scan on xyz (cost=0.00..398655.62 rows=2 width=4) (actual\ntime=0.052..3329.422 rows=1250000 loops=1)\n Filter: (((gs & 1) = 1) AND ((gs & 2) = 2) AND ((gs & 4) = 4))\n Rows Removed by Filter: 8750000\n Planning time: 0.119 ms\n Execution time: 3415.839 ms\n(5 rows)\n\nAs we can see estimates differs significally from the actual records count -\nonly three clauses are reducing estimated number of records from 10000000 to\n2.\n\nI noticed that each additional clause reduces the number about 200 times and\ndefine DEFAULT_NUM_DISTINCT is responsible for this behaviur.\n\nI think that this variable should be lower or maybe estimation using\nDEFAULT_NUM_DISTTINCT should be done once per table.\n\nArtur Zajac\n\n\n",
"msg_date": "Wed, 22 Nov 2017 15:29:54 +0100",
"msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad estimates"
},
{
"msg_contents": "I'm assuming you never analyzed the table after creation & data load? What\ndoes this show you:\n\nselect * from pg_stat_all_tables where relname='xyz';\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nI'm assuming you never analyzed the table after creation & data load? What does this show you:select * from pg_stat_all_tables where relname='xyz';Don.-- Don Seilerwww.seiler.us",
"msg_date": "Wed, 22 Nov 2017 08:49:04 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimates"
},
{
"msg_contents": "On Wed, Nov 22, 2017 at 03:29:54PM +0100, Artur Zając wrote:\n> CREATE TABLE xyz AS SELECT generate_series(1,10000000,1) AS gs;\n> \n> db=# explain analyze select * from xyz where gs&1=1;\n> Seq Scan on xyz (cost=0.00..260815.38 rows=68920 width=4) (actual time=0.044..2959.728 rows=5000000 loops=1)\n...\n> newrr=# explain analyze select * from xyz where gs&1=1 and gs&2=2 and gs&4=4;\n> Seq Scan on xyz (cost=0.00..398655.62 rows=2 width=4) (actual time=0.052..3329.422 rows=1250000 loops=1)\n\n> I noticed that each additional clause reduces the number about 200 times and\n> define DEFAULT_NUM_DISTINCT is responsible for this behaviur.\n\nI think it's actually:\n\nsrc/include/utils/selfuncs.h-/* default selectivity estimate for boolean and null test nodes */\nsrc/include/utils/selfuncs.h-#define DEFAULT_UNK_SEL 0.005\n\n..which is 1/200.\n\nNote, you can do this, which helps a bit by collecting stats for the index\nexpr:\n\npostgres=# CREATE INDEX ON xyz((gs&1));\npostgres=# ANALYZE xyz;\npostgres=# explain analyze SELECT * FROM xyz WHERE gs&1=1 AND gs&2=2 AND gs&4=4;\n Bitmap Heap Scan on xyz (cost=91643.59..259941.99 rows=124 width=4) (actual time=472.376..2294.035 rows=1250000 loops=1)\n Recheck Cond: ((gs & 1) = 1)\n Filter: (((gs & 2) = 2) AND ((gs & 4) = 4))\n Rows Removed by Filter: 3750000\n Heap Blocks: exact=44248\n -> Bitmap Index Scan on xyz_expr_idx (cost=0.00..91643.55 rows=4962016 width=0) (actual time=463.477..463.477 rows=5000000 loops=1)\n Index Cond: ((gs & 1) = 1)\n\nJustin\n\n",
"msg_date": "Wed, 22 Nov 2017 08:57:13 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimates (DEFAULT_UNK_SEL)"
},
{
"msg_contents": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]> writes:\n[ poor estimates for WHERE clauses like \"(gs & 1) = 1\" ]\n\nDon't hold your breath waiting for that to get better on its own.\nYou need to work with the planner, not expect it to perform magic.\nIt has no stats that would help it discover what the behavior of\nthat sort of WHERE clause is; nor is there a good reason for it\nto think that the selectivity of such a clause is only 0.5 rather\nthan something more in line with the usual behavior of an equality\nconstraint on an integer value.\n\nOne way you could attack the problem, if you're wedded to this data\nrepresentation, is to create expression indexes on the terms \"(gs & x)\"\nfor all the values of x you use. Not only would that result in better\nestimates (after an ANALYZE) but it would also open the door to satisfying\nthis type of query through an index search. A downside is that updating\nall those indexes could make DML on the table pretty expensive.\n\nIf you're not wedded to this data representation, consider replacing that\ninteger flags column with a bunch of boolean columns. You might or might\nnot want indexes on the booleans, but in any case ANALYZE would create\nstats that would allow decent estimates for \"WHERE boolval\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 22 Nov 2017 10:01:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimates"
},
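A minimal sketch of the two approaches Tom Lane describes, assuming the xyz table and the gs & 1 / & 2 / & 4 clauses from the original post; the index, column, and flag names are illustrative, not from the thread:

-- Option 1: expression indexes give the planner per-expression statistics
-- (and an index access path) after the next ANALYZE.
CREATE INDEX xyz_gs_and_1_idx ON xyz ((gs & 1));
CREATE INDEX xyz_gs_and_2_idx ON xyz ((gs & 2));
CREATE INDEX xyz_gs_and_4_idx ON xyz ((gs & 4));
ANALYZE xyz;

-- Option 2: store the flags as boolean columns so ordinary column
-- statistics apply to "WHERE flag1 AND flag2 AND flag4".
ALTER TABLE xyz
    ADD COLUMN flag1 boolean,
    ADD COLUMN flag2 boolean,
    ADD COLUMN flag4 boolean;
UPDATE xyz SET flag1 = ((gs & 1) = 1),
               flag2 = ((gs & 2) = 2),
               flag4 = ((gs & 4) = 4);
ANALYZE xyz;

As Tom notes, option 1 trades better estimates and index scans for extra write overhead on every UPDATE/INSERT; option 2 changes the schema but needs no extra indexes just to get sane selectivity.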
{
"msg_contents": "It doesn’t help in this case.\n\n \n\n--\n\nAlex Ignatov \nPostgres Professional: <http://www.postgrespro.com> http://www.postgrespro.com \nThe Russian Postgres Company\n\n \n\nFrom: Don Seiler [mailto:[email protected]] \nSent: Wednesday, November 22, 2017 5:49 PM\nTo: Artur Zając <[email protected]>\nCc: [email protected]\nSubject: Re: Bad estimates\n\n \n\nI'm assuming you never analyzed the table after creation & data load? What does this show you:\n\n \n\nselect * from pg_stat_all_tables where relname='xyz';\n\n \n\nDon.\n\n \n\n-- \n\nDon Seiler\nwww.seiler.us <http://www.seiler.us> \n\n\nIt doesn’t help in this case. --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company From: Don Seiler [mailto:[email protected]] Sent: Wednesday, November 22, 2017 5:49 PMTo: Artur Zając <[email protected]>Cc: [email protected]: Re: Bad estimates I'm assuming you never analyzed the table after creation & data load? What does this show you: select * from pg_stat_all_tables where relname='xyz'; Don. -- Don Seilerwww.seiler.us",
"msg_date": "Wed, 22 Nov 2017 18:05:11 +0300",
"msg_from": "\"Alex Ignatov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Bad estimates"
},
{
"msg_contents": "Artur Zając wrote:\n> We have table created like this:\n> \n> CREATE TABLE xyz AS SELECT generate_series(1,10000000,1) AS gs;\n> \n> Now:\n> \n> explain analyze select * from xyz where gs&1=1;\n\n> Seq Scan on xyz (cost=0.00..260815.38 rows=68920 width=4)\n> (actual time=0.044..2959.728 rows=5000000 loops=1)\n> Filter: ((gs & 1) = 1)\n> Rows Removed by Filter: 5000000\n[...]\n> And one more clause:\n> \n> explain analyze select * from xyz where gs&1=1 and gs&2=2 and gs&4=4;\n\n> Seq Scan on xyz (cost=0.00..398655.62 rows=2 width=4)\n> (actual time=0.052..3329.422 rows=1250000 loops=1)\n> Filter: (((gs & 1) = 1) AND ((gs & 2) = 2) AND ((gs & 4) = 4))\n> Rows Removed by Filter: 8750000\n\n> As we can see estimates differs significally from the actual records count -\n> only three clauses are reducing estimated number of records from 10000000 to\n> 2.\n> \n> I noticed that each additional clause reduces the number about 200 times and\n> define DEFAULT_NUM_DISTINCT is responsible for this behaviur.\n> \n> I think that this variable should be lower or maybe estimation using\n> DEFAULT_NUM_DISTTINCT should be done once per table.\n\nThe problem is that the expression \"gs & 1\" is a black box for the\noptimizer; it cannot estimate how selective the condition is and falls\nback to a default value that is too low.\n\nYou can create an index to\na) improve the estimate\nand\nb) speed up the queries:\n\nCREATE INDEX ON xyz ((gs & 1), (gs & 2), (gs & 4));\n\nDon't forget to ANALYZE afterwards.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Wed, 22 Nov 2017 16:09:54 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimates"
},
{
"msg_contents": "Thank you for your response,\n\nClause used by me is not important (I used binary & operator only for\nexample), I tried to show some kind of problems.\n\nNow I did another test:\n\nalter table xyz add x int;\nalter table xyz add y int;\nalter table xyz add z int;\nupdate xyz set x=gs,y=gs,z=gs;\n\ncreate index xyza_i1 on xyz ((x%200));\ncreate index xyza_i2 on xyz ((y%200));\ncreate index xyza_i3 on xyz ((z%200));\n\nvacuum full verbose xyza;\n\nAnd now:\n\nexplain analyze select gs from xyza where (x%200)=1 and (y%200)=1 and\n(z%200)=1;\n\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------------\n Bitmap Heap Scan on xyz (cost=2782.81..2786.83 rows=1 width=4) (actual\ntime=134.827..505.642 rows=50000 loops=1)\n Recheck Cond: (((z % 200) = 1) AND ((y % 200) = 1) AND ((x % 200) = 1))\n Heap Blocks: exact=50000\n -> BitmapAnd (cost=2782.81..2782.81 rows=1 width=0) (actual\ntime=108.712..108.712 rows=0 loops=1)\n -> Bitmap Index Scan on xyza_i3 (cost=0.00..927.43 rows=50000\nwidth=0) (actual time=22.857..22.857 rows=50000 loops=1)\n Index Cond: ((z % 200) = 1)\n -> Bitmap Index Scan on xyza_i2 (cost=0.00..927.43 rows=50000\nwidth=0) (actual time=26.058..26.058 rows=50000 loops=1)\n Index Cond: ((y % 200) = 1)\n -> Bitmap Index Scan on xyza_i1 (cost=0.00..927.43 rows=50000\nwidth=0) (actual time=23.079..23.079 rows=50000 loops=1)\n Index Cond: ((x % 200) = 1)\n Planning time: 0.340 ms\n Execution time: 513.171 ms\n(12 rows)\n\nEstimates are exactly the same because it's assumed that if first clause\nreduces records count by n, second by m, third by o then bringing all of\nthem together will reduce the result records count by n*m*o, so it is the\ngeneral behaviour, independent of whether they are statistics or not.\n\nYou suggest:\n\n> If you're not wedded to this data representation, consider replacing that\ninteger flags column with a bunch of boolean columns. You might or might\nnot want indexes on the booleans, but > in any case ANALYZE would create\nstats that would allow decent estimates for \"WHERE boolval\".\n\nBut, did you ever think about something like this?\n\nCREATE STATISTICS ON (x&1) FROM xyz;\n\n(using the syntax similar to CREATE STATISTICS from PostgreSQL 10).\n\nSometimes It's not possibile to divide one column into many , and as I know,\nit is not worth creating an index if there are few different values in the\ntable.\n\n\nArtur Zajac \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, November 22, 2017 4:02 PM\nTo: Artur Zając <[email protected]>\nCc: [email protected]\nSubject: Re: Bad estimates\n\n=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]> writes:\n[ poor estimates for WHERE clauses like \"(gs & 1) = 1\" ]\n\nDon't hold your breath waiting for that to get better on its own.\nYou need to work with the planner, not expect it to perform magic.\nIt has no stats that would help it discover what the behavior of that sort\nof WHERE clause is; nor is there a good reason for it to think that the\nselectivity of such a clause is only 0.5 rather than something more in line\nwith the usual behavior of an equality constraint on an integer value.\n\nOne way you could attack the problem, if you're wedded to this data\nrepresentation, is to create expression indexes on the terms \"(gs & x)\"\nfor all the values of x you use. 
Not only would that result in better\nestimates (after an ANALYZE) but it would also open the door to satisfying\nthis type of query through an index search. A downside is that updating all\nthose indexes could make DML on the table pretty expensive.\n\nIf you're not wedded to this data representation, consider replacing that\ninteger flags column with a bunch of boolean columns. You might or might\nnot want indexes on the booleans, but in any case ANALYZE would create stats\nthat would allow decent estimates for \"WHERE boolval\".\n\n\t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Wed, 22 Nov 2017 17:52:30 +0100",
"msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Bad estimates"
}
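A sketch of what CREATE STATISTICS can already do in PostgreSQL 10 for the x, y, z example above; this is an editorial addition rather than something proposed in the thread, and the statistics object name is illustrative. In version 10 the command accepts plain column names only, so it cannot be pointed at an expression like (x & 1), but it can record the cross-column correlation that the per-clause selectivity multiplication misses:

-- PostgreSQL 10: extended statistics on plain columns (expressions are not accepted here)
CREATE STATISTICS xyz_xyz_stats (ndistinct, dependencies) ON x, y, z FROM xyz;
ANALYZE xyz;
-- Improves estimates for correlated equality clauses on the columns themselves, e.g.
--   WHERE x = 1 AND y = 1 AND z = 1
-- Expression clauses such as (x % 200) = 1 still need an expression index plus ANALYZE,
-- as suggested earlier in the thread.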
] |
[
{
"msg_contents": "I configured replication with repmgr on 2 postgresql 9.6.3 nodes. Both of\nthose utilities can handle failover but I should let only one of them do\nit. So, I wanted to know who should be the one responsible for the failover\nand why ?\n\nThanks .\n\nI configured replication with repmgr on 2 postgresql 9.6.3 nodes. Both of those utilities can handle failover but I should let only one of them do it. So, I wanted to know who should be the one responsible for the failover and why ?Thanks .",
"msg_date": "Thu, 23 Nov 2017 20:14:49 +0200",
"msg_from": "Mariel Cherkassky <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgpool + repmgr - who should be responsible for failover"
}
] |
[
{
"msg_contents": "Hi team,\n\nWe are facing the below issue while logging with postgres as given below:\n\n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n\n\nOS: Centos\nAWS environment\n\nRegards,\nDaulat\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 24 Nov 2017 03:27:28 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue with postgres login"
},
{
"msg_contents": "It seems you may have just the postgresql package installed and not the\npostgresql-server package. This Linode guide may be helpful:\n\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n\n-B\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n> Hi team,\n>\n>\n>\n> We are facing the below issue while logging with postgres as given below:\n>\n>\n>\n> [centos@ip-192-90-2-208 ~]$ su - postgres\n>\n> Password:\n>\n> *Last login: Thu Nov 23 16:15:45 UTC 2017 on pts/1*\n>\n> *-bash-4.2$ psql*\n>\n> *psql: could not connect to server: No such file or directory*\n>\n> * Is the server running locally and accepting*\n>\n> * connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?*\n>\n> -bash-4.2$\n>\n>\n>\n>\n>\n> *OS: Centos*\n>\n> *AWS environment*\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n-- \nThanks,\n\n-B\n\nIt seems you may have just the postgresql package installed and not the postgresql-server package. This Linode guide may be helpful:https://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7-BOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- Thanks,-B",
"msg_date": "Fri, 24 Nov 2017 03:37:22 +0000",
"msg_from": "Bob Strecansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue with postgres login"
},
{
"msg_contents": "Hi team,\r\n\r\nBut it was working fine earlier.\r\n\r\nAnother message I have seen after starting by\r\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\r\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world access\r\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\r\n\r\nFor more details please refer to trail mail\r\nRegards,\r\nDaulat\r\nFrom: Bob Strecansky [mailto:[email protected]]\r\nSent: 24 November, 2017 9:07 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: Issue with postgres login\r\n\r\nIt seems you may have just the postgresql package installed and not the postgresql-server package. This Linode guide may be helpful:\r\n\r\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\r\n\r\n-B\r\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi team,\r\n\r\nWe are facing the below issue while logging with postgres as given below:\r\n\r\n[centos@ip-192-90-2-208 ~]$ su - postgres\r\nPassword:\r\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\r\n-bash-4.2$ psql\r\npsql: could not connect to server: No such file or directory\r\n Is the server running locally and accepting\r\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\r\n-bash-4.2$\r\n\r\n\r\nOS: Centos\r\nAWS environment\r\n\r\nRegards,\r\nDaulat\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n--\r\nThanks,\r\n\r\n-B\r\n\n\n\n\n\n\n\n\n\nHi team,\n \nBut it was working fine earlier.\n \nAnother message I have seen after starting by\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\n\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world access\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n \nFor more details please refer to trail mail\nRegards,\nDaulat\nFrom: Bob Strecansky [mailto:[email protected]]\r\n\nSent: 24 November, 2017 9:07 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Issue with postgres login\n \nIt seems you may have just the postgresql package installed and not the postgresql-server package. 
This Linode guide may be helpful:\n\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n\r\n-B\n\n\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n-- \n\n\n\nThanks,\n\r\n-B",
"msg_date": "Fri, 24 Nov 2017 04:04:39 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Issue with postgres login"
},
{
"msg_contents": "It appears the permissions with your data directory have changed. According\nto the error message, you should set the appropriate permissions for\n/var/lib/pgsql/9.3/data\n\n-B\n\nOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\n\n> Hi team,\n>\n>\n>\n> But it was working fine earlier.\n>\n>\n>\n> Another message I have seen after starting by\n>\n> -bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l\n> logfile start\n>\n> < 2017-11-24 04:00:40.780 UTC >FATAL: data directory\n> \"/var/lib/pgsql/9.3/data\" has group or world access\n>\n> < 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n>\n>\n>\n> For more details please refer to trail mail\n>\n> Regards,\n>\n> Daulat\n>\n> *From:* Bob Strecansky [mailto:[email protected]]\n> *Sent:* 24 November, 2017 9:07 AM\n> *To:* Daulat Ram <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* [EXTERNAL]Re: Issue with postgres login\n>\n>\n>\n> It seems you may have just the postgresql package installed and not the\n> postgresql-server package. This Linode guide may be helpful:\n>\n>\n> https://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n>\n> -B\n>\n> On Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n>\n> Hi team,\n>\n>\n>\n> We are facing the below issue while logging with postgres as given below:\n>\n>\n>\n> [centos@ip-192-90-2-208 ~]$ su - postgres\n>\n> Password:\n>\n> *Last login: Thu Nov 23 16:15:45 UTC 2017 on pts/1*\n>\n> *-bash-4.2$ psql*\n>\n> *psql: could not connect to server: No such file or directory*\n>\n> * Is the server running locally and accepting*\n>\n> * connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?*\n>\n> -bash-4.2$\n>\n>\n>\n>\n>\n> *OS: Centos*\n>\n> *AWS environment*\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n>\n> ------------------------------\n>\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n> --\n>\n> Thanks,\n>\n> -B\n>\n-- \nThanks,\n\n-B\n\nIt appears the permissions with your data directory have changed. According to the error message, you should set the appropriate permissions for /var/lib/pgsql/9.3/data-BOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\n\n\nHi team,\n \nBut it was working fine earlier.\n \nAnother message I have seen after starting by\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\n\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world access\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n \nFor more details please refer to trail mail\nRegards,\nDaulat\nFrom: Bob Strecansky [mailto:[email protected]]\n\nSent: 24 November, 2017 9:07 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Issue with postgres login\n \nIt seems you may have just the postgresql package installed and not the postgresql-server package. 
This Linode guide may be helpful:\n\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n\n-B\n\n\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n-- \n\n\n\nThanks,\n\n-B\n\n\n-- Thanks,-B",
"msg_date": "Fri, 24 Nov 2017 04:07:20 +0000",
"msg_from": "Bob Strecansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Issue with postgres login"
},
{
"msg_contents": "Hello team,\r\n\r\nWe have the data directory under /var/lib/pgsql/9.3/data/\r\n\r\nFYI. Below are the permissions to group and user on each directory. Please do help .\r\n\r\ndrwxr-xr-x. 19 root root 267 Nov 22 08:42 var/\r\ndrwxr-xr-x. 30 root root 4096 Nov 23 09:09 lib/\r\ndrwxrwxr-x. 3 postgres postgres 129 Nov 23 16:05 pgsql/\r\ndrwxrwxr-x. 4 postgres postgres 66 Nov 24 04:00 9.3/\r\ndrwxrwxr-x. 16 postgres postgres 4096 Nov 24 03:42 data/\r\n\r\n-rwxrwxrwx. 1 postgres postgres 4 Nov 23 09:09 PG_VERSION\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_twophase\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_tblspc\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_snapshots\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_serial\r\ndrwxrwxr-x. 4 postgres postgres 36 Nov 23 09:09 pg_multixact\r\n-rwxrwxrwx. 1 postgres postgres 1636 Nov 23 09:09 pg_ident.conf\r\ndrwxrwxr-x. 3 postgres postgres 60 Nov 23 09:09 pg_xlog\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_subtrans\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_clog\r\ndrwxrwxr-x. 5 postgres postgres 41 Nov 23 09:09 base\r\n-rwxrwxrwx. 1 postgres postgres 59 Nov 23 09:12 postmaster.opts\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:12 pg_notify\r\ndrwxrwxr-x. 2 postgres postgres 32 Nov 23 09:12 pg_log\r\ndrwxrwxr-x. 2 postgres postgres 4096 Nov 23 09:12 global\r\n-rwxrwxrwx. 1 postgres postgres 4232 Nov 23 10:50 pg_hba.conf_backup\r\n-rwxrwxrwx. 1 postgres postgres 9273 Nov 23 13:27 pg_hba.conf\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 14:32 pg_stat_tmp\r\ndrwxrwxr-x. 2 postgres postgres 63 Nov 23 14:32 pg_stat\r\n-rwxr-xr-x. 1 root root 20137 Nov 23 14:41 postgresql.conf.bak_231117\r\n-rwxrwxrwx. 1 postgres postgres 20230 Nov 23 15:03 postgresql.conf_original\r\n-rwx------. 1 postgres postgres 20137 Nov 24 03:30 postgresql.conf_24_nov\r\n-rwx------. 1 postgres postgres 20124 Nov 24 03:42 postgresql.conf\r\n\r\nAlso we have change the below parameters in Postgres.conf file to resolve the issue but still not resolved.\r\n\r\nlisten_addresses = '*\r\nport = 5432\r\nmax_connections = 100\r\nunix_socket_directories = '/tmp'\r\nunix_socket_group = ''\r\nunix_socket_permissions = 0777\r\n\r\nRegards,\r\nDaulat\r\n\r\nFrom: Bob Strecansky [mailto:[email protected]]\r\nSent: 24 November, 2017 9:37 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: Re: Issue with postgres login\r\n\r\nIt appears the permissions with your data directory have changed. 
According to the error message, you should set the appropriate permissions for /var/lib/pgsql/9.3/data\r\n\r\n-B\r\n\r\nOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi team,\r\n\r\nBut it was working fine earlier.\r\n\r\nAnother message I have seen after starting by\r\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\r\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world access\r\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\r\n\r\nFor more details please refer to trail mail\r\nRegards,\r\nDaulat\r\nFrom: Bob Strecansky [mailto:[email protected]<mailto:[email protected]>]\r\nSent: 24 November, 2017 9:07 AM\r\nTo: Daulat Ram <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: [EXTERNAL]Re: Issue with postgres login\r\n\r\nIt seems you may have just the postgresql package installed and not the postgresql-server package. This Linode guide may be helpful:\r\n\r\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\r\n\r\n-B\r\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi team,\r\n\r\nWe are facing the below issue while logging with postgres as given below:\r\n\r\n[centos@ip-192-90-2-208 ~]$ su - postgres\r\nPassword:\r\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\r\n-bash-4.2$ psql\r\npsql: could not connect to server: No such file or directory\r\n Is the server running locally and accepting\r\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\r\n-bash-4.2$\r\n\r\n\r\nOS: Centos\r\nAWS environment\r\n\r\nRegards,\r\nDaulat\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n--\r\nThanks,\r\n\r\n-B\r\n--\r\nThanks,\r\n\r\n-B\r\n\n\n\n\n\n\n\n\n\nHello team,\n \nWe have the data directory under\r\n/var/lib/pgsql/9.3/data/\n \nFYI. Below are the permissions to group and user on each directory. Please do help .\n \ndrwxr-xr-x. 19 root root 267 Nov 22 08:42\r\nvar/\ndrwxr-xr-x. 30 root root 4096 Nov 23 09:09\r\nlib/\ndrwxrwxr-x. 3 postgres postgres 129 Nov 23 16:05\r\npgsql/\ndrwxrwxr-x. 4 postgres postgres 66 Nov 24 04:00\r\n9.3/\ndrwxrwxr-x. 16 postgres postgres 4096 Nov 24 03:42\r\ndata/\n \n-rwxrwxrwx. 1 postgres postgres 4 Nov 23 09:09 PG_VERSION\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_twophase\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_tblspc\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_snapshots\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_serial\ndrwxrwxr-x. 4 postgres postgres 36 Nov 23 09:09 pg_multixact\n-rwxrwxrwx. 1 postgres postgres 1636 Nov 23 09:09 pg_ident.conf\ndrwxrwxr-x. 3 postgres postgres 60 Nov 23 09:09 pg_xlog\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_subtrans\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_clog\ndrwxrwxr-x. 
5 postgres postgres 41 Nov 23 09:09 base\n-rwxrwxrwx. 1 postgres postgres 59 Nov 23 09:12 postmaster.opts\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:12 pg_notify\ndrwxrwxr-x. 2 postgres postgres 32 Nov 23 09:12 pg_log\ndrwxrwxr-x. 2 postgres postgres 4096 Nov 23 09:12 global\n-rwxrwxrwx. 1 postgres postgres 4232 Nov 23 10:50 pg_hba.conf_backup\n-rwxrwxrwx. 1 postgres postgres 9273 Nov 23 13:27 pg_hba.conf\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 14:32 pg_stat_tmp\ndrwxrwxr-x. 2 postgres postgres 63 Nov 23 14:32 pg_stat\n-rwxr-xr-x. 1 root root 20137 Nov 23 14:41 postgresql.conf.bak_231117\n-rwxrwxrwx. 1 postgres postgres 20230 Nov 23 15:03 postgresql.conf_original\n-rwx------. 1 postgres postgres 20137 Nov 24 03:30 postgresql.conf_24_nov\n-rwx------. 1 postgres postgres 20124 Nov 24 03:42 postgresql.conf\n \nAlso we have change the below parameters in\r\nPostgres.conf file to resolve the issue but still not resolved.\n \nlisten_addresses = '* \r\n\nport = 5432 \r\n\nmax_connections = 100 \r\n\nunix_socket_directories = '/tmp' \r\n\nunix_socket_group = '' \r\n\nunix_socket_permissions = 0777 \r\n\n \nRegards,\nDaulat\n \nFrom: Bob Strecansky [mailto:[email protected]]\r\n\nSent: 24 November, 2017 9:37 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Re: Issue with postgres login\n \n\n\nIt appears the permissions with your data directory have changed. According to the error message, you should set the appropriate permissions for /var/lib/pgsql/9.3/data\n\n\n \n\n\n-B\n\n \n\n\nOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nBut it was working fine earlier.\n \nAnother message I have seen after starting by\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world\r\n access\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n \nFor more details please refer to trail mail\nRegards,\nDaulat\nFrom: Bob Strecansky [mailto:[email protected]]\r\n\nSent: 24 November, 2017 9:07 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Issue with postgres login\n\n\n\n\n \nIt seems you may have just the postgresql package installed and not the postgresql-server package. This Linode guide may be helpful:\n\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n\r\n-B\n\n\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n--\r\n\n\n\n\nThanks,\n\r\n-B\n\n\n\n\n\n\n\n\n-- \n\n\n\nThanks,\n\r\n-B",
"msg_date": "Fri, 24 Nov 2017 04:23:23 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Re: Issue with postgres login"
},
{
"msg_contents": "It appears our file permissions are too wide open (which correlated with\nthe error message you shared in your previous email. Changing those to 700\nshould get you going in the correct direction.\n\n-B\n\nOn Thu, Nov 23, 2017 at 23:23 Daulat Ram <[email protected]> wrote:\n\n> Hello team,\n>\n>\n>\n> We have the data directory under */var/lib/pgsql/9.3/data/*\n>\n>\n>\n> FYI. Below are the permissions to group and user on each directory. Please\n> do help .\n>\n>\n>\n> drwxr-xr-x. 19 root root 267 Nov 22 08:42 *var/*\n>\n> drwxr-xr-x. 30 root root 4096 Nov 23 09:09 *lib/*\n>\n> drwxrwxr-x. 3 postgres postgres 129 Nov 23 16:05 *pgsql/*\n>\n> drwxrwxr-x. 4 postgres postgres 66 Nov 24 04:00 *9.3/*\n>\n> drwxrwxr-x. 16 postgres postgres 4096 Nov 24 03:42 *data/*\n>\n>\n>\n> -rwxrwxrwx. 1 postgres postgres 4 Nov 23 09:09 PG_VERSION\n>\n> drwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_twophase\n>\n> drwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_tblspc\n>\n> drwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_snapshots\n>\n> drwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_serial\n>\n> drwxrwxr-x. 4 postgres postgres 36 Nov 23 09:09 pg_multixact\n>\n> -rwxrwxrwx. 1 postgres postgres 1636 Nov 23 09:09 pg_ident.conf\n>\n> drwxrwxr-x. 3 postgres postgres 60 Nov 23 09:09 pg_xlog\n>\n> drwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_subtrans\n>\n> drwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_clog\n>\n> drwxrwxr-x. 5 postgres postgres 41 Nov 23 09:09 base\n>\n> -rwxrwxrwx. 1 postgres postgres 59 Nov 23 09:12 postmaster.opts\n>\n> drwxrwxr-x. 2 postgres postgres 18 Nov 23 09:12 pg_notify\n>\n> drwxrwxr-x. 2 postgres postgres 32 Nov 23 09:12 pg_log\n>\n> drwxrwxr-x. 2 postgres postgres 4096 Nov 23 09:12 global\n>\n> -rwxrwxrwx. 1 postgres postgres 4232 Nov 23 10:50 pg_hba.conf_backup\n>\n> -rwxrwxrwx. 1 postgres postgres 9273 Nov 23 13:27 pg_hba.conf\n>\n> drwxrwxr-x. 2 postgres postgres 6 Nov 23 14:32 pg_stat_tmp\n>\n> drwxrwxr-x. 2 postgres postgres 63 Nov 23 14:32 pg_stat\n>\n> -rwxr-xr-x. 1 root root 20137 Nov 23 14:41\n> postgresql.conf.bak_231117\n>\n> -rwxrwxrwx. 1 postgres postgres 20230 Nov 23 15:03 postgresql.conf_original\n>\n> -rwx------. 1 postgres postgres 20137 Nov 24 03:30 postgresql.conf_24_nov\n>\n> -rwx------. 
1 postgres postgres 20124 Nov 24 03:42 postgresql.conf\n>\n>\n>\n> Also we have change the below parameters in *Postgres.conf* file to\n> resolve the issue but still not resolved.\n>\n>\n>\n> *listen_addresses = '* *\n>\n> *port = 5432 *\n>\n> *max_connections = 100 *\n>\n> *unix_socket_directories = '/tmp' *\n>\n> *unix_socket_group = '' *\n>\n> *unix_socket_permissions = 0777 *\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n>\n>\n> *From:* Bob Strecansky [mailto:[email protected]]\n> *Sent:* 24 November, 2017 9:37 AM\n>\n>\n> *To:* Daulat Ram <[email protected]>\n> *Cc:* [email protected]\n>\n> *Subject:* [EXTERNAL]Re: Re: Issue with postgres login\n>\n>\n>\n> It appears the permissions with your data directory have changed.\n> According to the error message, you should set the appropriate permissions\n> for /var/lib/pgsql/9.3/data\n>\n>\n>\n> -B\n>\n>\n>\n> On Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\n>\n> Hi team,\n>\n>\n>\n> But it was working fine earlier.\n>\n>\n>\n> Another message I have seen after starting by\n>\n> -bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l\n> logfile start\n>\n> < 2017-11-24 04:00:40.780 UTC >FATAL: data directory\n> \"/var/lib/pgsql/9.3/data\" has group or world access\n>\n> < 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n>\n>\n>\n> For more details please refer to trail mail\n>\n> Regards,\n>\n> Daulat\n>\n> *From:* Bob Strecansky [mailto:[email protected]]\n> *Sent:* 24 November, 2017 9:07 AM\n> *To:* Daulat Ram <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* [EXTERNAL]Re: Issue with postgres login\n>\n>\n>\n> It seems you may have just the postgresql package installed and not the\n> postgresql-server package. This Linode guide may be helpful:\n>\n>\n> https://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n>\n> -B\n>\n> On Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n>\n> Hi team,\n>\n>\n>\n> We are facing the below issue while logging with postgres as given below:\n>\n>\n>\n> [centos@ip-192-90-2-208 ~]$ su - postgres\n>\n> Password:\n>\n> *Last login: Thu Nov 23 16:15:45 UTC 2017 on pts/1*\n>\n> *-bash-4.2$ psql*\n>\n> *psql: could not connect to server: No such file or directory*\n>\n> * Is the server running locally and accepting*\n>\n> * connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?*\n>\n> -bash-4.2$\n>\n>\n>\n>\n>\n> *OS: Centos*\n>\n> *AWS environment*\n>\n>\n>\n> Regards,\n>\n> Daulat\n>\n>\n> ------------------------------\n>\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n> --\n>\n> Thanks,\n>\n> -B\n>\n> --\n>\n> Thanks,\n>\n> -B\n>\n-- \nThanks,\n\n-B\n\nIt appears our file permissions are too wide open (which correlated with the error message you shared in your previous email. 
Changing those to 700 should get you going in the correct direction.-BOn Thu, Nov 23, 2017 at 23:23 Daulat Ram <[email protected]> wrote:\n\n\nHello team,\n \nWe have the data directory under\n/var/lib/pgsql/9.3/data/\n \nFYI. Below are the permissions to group and user on each directory. Please do help .\n \ndrwxr-xr-x. 19 root root 267 Nov 22 08:42\nvar/\ndrwxr-xr-x. 30 root root 4096 Nov 23 09:09\nlib/\ndrwxrwxr-x. 3 postgres postgres 129 Nov 23 16:05\npgsql/\ndrwxrwxr-x. 4 postgres postgres 66 Nov 24 04:00\n9.3/\ndrwxrwxr-x. 16 postgres postgres 4096 Nov 24 03:42\ndata/\n \n-rwxrwxrwx. 1 postgres postgres 4 Nov 23 09:09 PG_VERSION\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_twophase\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_tblspc\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_snapshots\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_serial\ndrwxrwxr-x. 4 postgres postgres 36 Nov 23 09:09 pg_multixact\n-rwxrwxrwx. 1 postgres postgres 1636 Nov 23 09:09 pg_ident.conf\ndrwxrwxr-x. 3 postgres postgres 60 Nov 23 09:09 pg_xlog\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_subtrans\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_clog\ndrwxrwxr-x. 5 postgres postgres 41 Nov 23 09:09 base\n-rwxrwxrwx. 1 postgres postgres 59 Nov 23 09:12 postmaster.opts\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:12 pg_notify\ndrwxrwxr-x. 2 postgres postgres 32 Nov 23 09:12 pg_log\ndrwxrwxr-x. 2 postgres postgres 4096 Nov 23 09:12 global\n-rwxrwxrwx. 1 postgres postgres 4232 Nov 23 10:50 pg_hba.conf_backup\n-rwxrwxrwx. 1 postgres postgres 9273 Nov 23 13:27 pg_hba.conf\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 14:32 pg_stat_tmp\ndrwxrwxr-x. 2 postgres postgres 63 Nov 23 14:32 pg_stat\n-rwxr-xr-x. 1 root root 20137 Nov 23 14:41 postgresql.conf.bak_231117\n-rwxrwxrwx. 1 postgres postgres 20230 Nov 23 15:03 postgresql.conf_original\n-rwx------. 1 postgres postgres 20137 Nov 24 03:30 postgresql.conf_24_nov\n-rwx------. 1 postgres postgres 20124 Nov 24 03:42 postgresql.conf\n \nAlso we have change the below parameters in\nPostgres.conf file to resolve the issue but still not resolved.\n \nlisten_addresses = '* \n\nport = 5432 \n\nmax_connections = 100 \n\nunix_socket_directories = '/tmp' \n\nunix_socket_group = '' \n\nunix_socket_permissions = 0777 \n\n \nRegards,\nDaulat\n \nFrom: Bob Strecansky [mailto:[email protected]]\n\nSent: 24 November, 2017 9:37 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Re: Issue with postgres login\n \n\n\nIt appears the permissions with your data directory have changed. According to the error message, you should set the appropriate permissions for /var/lib/pgsql/9.3/data\n\n\n \n\n\n-B\n\n \n\n\nOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nBut it was working fine earlier.\n \nAnother message I have seen after starting by\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world\n access\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\n \nFor more details please refer to trail mail\nRegards,\nDaulat\nFrom: Bob Strecansky [mailto:[email protected]]\n\nSent: 24 November, 2017 9:07 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL]Re: Issue with postgres login\n\n\n\n\n \nIt seems you may have just the postgresql package installed and not the postgresql-server package. 
This Linode guide may be helpful:\n\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\n\n-B\n\n\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi team,\n \nWe are facing the below issue while logging with postgres as given below:\n \n[centos@ip-192-90-2-208 ~]$ su - postgres\nPassword:\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\n-bash-4.2$ psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n-bash-4.2$\n \n \nOS: Centos\nAWS environment\n \nRegards,\nDaulat\n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n--\n\n\n\n\nThanks,\n\n-B\n\n\n\n\n\n\n\n\n-- \n\n\n\nThanks,\n\n-B\n\n\n-- Thanks,-B",
"msg_date": "Fri, 24 Nov 2017 04:25:42 +0000",
"msg_from": "Bob Strecansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Re: Issue with postgres login"
},
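A short sketch of the permission fix being suggested, using the paths already shown in the thread (run as root, or as the postgres user if it owns the directory); treat it as a starting point rather than a complete hardening pass:

chmod 700 /var/lib/pgsql/9.3/data
# optionally tighten the world-writable files listed above as well
chmod -R go-rwx /var/lib/pgsql/9.3/data
# then start the server again
/usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data -l logfile start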
{
"msg_contents": "Thanks for your response. But now I am giving below error while restore the backup\r\n./pg_restore -i -h localhost -p 5432 -U postgres --role postgres -d fleotan -W -v /backup/fleotanOne11.backup >& /backup/fleotanOne32_restore.log\r\n\r\npg_restore: [archiver (db)] connection to database \"fleotan\" failed: FATAL: Ident authentication failed for user \"postgres\"\r\n\r\n\r\npga_hba.conf have :\r\n\r\n# TYPE DATABASE USER ADDRESS METHOD\r\n\r\n# \"local\" is for Unix domain socket connections only\r\n#local all all peer\r\nlocal all all trust\r\n# IPv4 local connections:\r\nhost all all 127.0.0.1/32 md5\r\nhost all all 0.0.0.0/0 md5\r\n# IPv6 local connections:\r\nhost all all ::1/128 ident\r\n# Allow replication connections from localhost, by a user with the\r\n# replication privilege.\r\n#local replication postgres peer\r\n#host replication postgres 127.0.0.1/32 ident\r\n#host replication postgres ::1/128 ident\r\n\r\nPlz suggest\r\n\r\nRegards,\r\nDaulat\r\nFrom: Bob Strecansky [mailto:[email protected]]\r\nSent: 24 November, 2017 9:56 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: Re: Re: Issue with postgres login\r\n\r\nIt appears our file permissions are too wide open (which correlated with the error message you shared in your previous email. Changing those to 700 should get you going in the correct direction.\r\n\r\n-B\r\n\r\nOn Thu, Nov 23, 2017 at 23:23 Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHello team,\r\n\r\nWe have the data directory under /var/lib/pgsql/9.3/data/\r\n\r\nFYI. Below are the permissions to group and user on each directory. Please do help .\r\n\r\ndrwxr-xr-x. 19 root root 267 Nov 22 08:42 var/\r\ndrwxr-xr-x. 30 root root 4096 Nov 23 09:09 lib/\r\ndrwxrwxr-x. 3 postgres postgres 129 Nov 23 16:05 pgsql/\r\ndrwxrwxr-x. 4 postgres postgres 66 Nov 24 04:00 9.3/\r\ndrwxrwxr-x. 16 postgres postgres 4096 Nov 24 03:42 data/\r\n\r\n-rwxrwxrwx. 1 postgres postgres 4 Nov 23 09:09 PG_VERSION\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_twophase\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_tblspc\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_snapshots\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 09:09 pg_serial\r\ndrwxrwxr-x. 4 postgres postgres 36 Nov 23 09:09 pg_multixact\r\n-rwxrwxrwx. 1 postgres postgres 1636 Nov 23 09:09 pg_ident.conf\r\ndrwxrwxr-x. 3 postgres postgres 60 Nov 23 09:09 pg_xlog\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_subtrans\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:09 pg_clog\r\ndrwxrwxr-x. 5 postgres postgres 41 Nov 23 09:09 base\r\n-rwxrwxrwx. 1 postgres postgres 59 Nov 23 09:12 postmaster.opts\r\ndrwxrwxr-x. 2 postgres postgres 18 Nov 23 09:12 pg_notify\r\ndrwxrwxr-x. 2 postgres postgres 32 Nov 23 09:12 pg_log\r\ndrwxrwxr-x. 2 postgres postgres 4096 Nov 23 09:12 global\r\n-rwxrwxrwx. 1 postgres postgres 4232 Nov 23 10:50 pg_hba.conf_backup\r\n-rwxrwxrwx. 1 postgres postgres 9273 Nov 23 13:27 pg_hba.conf\r\ndrwxrwxr-x. 2 postgres postgres 6 Nov 23 14:32 pg_stat_tmp\r\ndrwxrwxr-x. 2 postgres postgres 63 Nov 23 14:32 pg_stat\r\n-rwxr-xr-x. 1 root root 20137 Nov 23 14:41 postgresql.conf.bak_231117\r\n-rwxrwxrwx. 1 postgres postgres 20230 Nov 23 15:03 postgresql.conf_original\r\n-rwx------. 1 postgres postgres 20137 Nov 24 03:30 postgresql.conf_24_nov\r\n-rwx------. 
1 postgres postgres 20124 Nov 24 03:42 postgresql.conf\r\n\r\nAlso we have changed the below parameters in postgresql.conf to resolve the issue, but it is still not resolved:\r\n\r\nlisten_addresses = '*\r\nport = 5432\r\nmax_connections = 100\r\nunix_socket_directories = '/tmp'\r\nunix_socket_group = ''\r\nunix_socket_permissions = 0777\r\n\r\nRegards,\r\nDaulat\r\n\r\nFrom: Bob Strecansky [mailto:[email protected]]\r\nSent: 24 November, 2017 9:37 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: Re: Issue with postgres login\r\n\r\nIt appears the permissions on your data directory have changed. According to the error message, you should set the appropriate permissions for /var/lib/pgsql/9.3/data\r\n\r\n-B\r\n\r\nOn Thu, Nov 23, 2017 at 23:04 Daulat Ram <[email protected]> wrote:\r\nHi team,\r\n\r\nBut it was working fine earlier.\r\n\r\nAnother message I have seen after starting with\r\n-bash-4.2$ /usr/pgsql-9.3/bin/pg_ctl -D /var/lib/pgsql/9.3/data/ -l logfile start\r\n< 2017-11-24 04:00:40.780 UTC >FATAL: data directory \"/var/lib/pgsql/9.3/data\" has group or world access\r\n< 2017-11-24 04:00:40.780 UTC >DETAIL: Permissions should be u=rwx (0700).\r\n\r\nFor more details please refer to the trail mail.\r\nRegards,\r\nDaulat\r\nFrom: Bob Strecansky [mailto:[email protected]]\r\nSent: 24 November, 2017 9:07 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL]Re: Issue with postgres login\r\n\r\nIt seems you may have just the postgresql package installed and not the postgresql-server package. This Linode guide may be helpful:\r\n\r\nhttps://www.linode.com/docs/databases/postgresql/how-to-install-postgresql-relational-databases-on-centos-7\r\n\r\n-B\r\nOn Thu, Nov 23, 2017 at 22:28 Daulat Ram <[email protected]> wrote:\r\nHi team,\r\n\r\nWe are facing the below issue while logging in as postgres:\r\n\r\n[centos@ip-192-90-2-208 ~]$ su - postgres\r\nPassword:\r\nLast login: Thu Nov 23 16:15:45 UTC 2017 on pts/1\r\n-bash-4.2$ psql\r\npsql: could not connect to server: No such file or directory\r\n    Is the server running locally and accepting\r\n    connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\r\n-bash-4.2$\r\n\r\nOS: CentOS\r\nAWS environment\r\n\r\nRegards,\r\nDaulat\r\n--\r\nThanks,\r\n\r\n-B",
"msg_date": "Fri, 24 Nov 2017 06:49:26 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: Re: Re: Issue with postgres login"
}
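A side observation on the pg_hba.conf quoted above: the pg_restore command connects with -h localhost, and if localhost resolves to ::1 the connection is matched by the "host all all ::1/128 ident" line rather than the md5 lines, which would explain the "Ident authentication failed" error. Edits to pg_hba.conf also only take effect after a reload. A minimal sketch, assuming a working local superuser connection over the Unix socket:

-- Confirm which pg_hba.conf the server is actually reading:
SHOW hba_file;

-- After editing pg_hba.conf (for example changing the ::1/128 line from
-- ident to md5, or forcing IPv4 by connecting with -h 127.0.0.1),
-- reload the configuration so the change is picked up:
SELECT pg_reload_conf();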
] |
[
{
"msg_contents": "Hi there,\n\nWe are creating a new DB which will behave most like a file system, I mean,\nthere will be no complex queries or joins running in the DB. The idea is to\ngrab the WHOLE set of messages for a particular user and then filter,\norder, combine or full text search in the function itself (AWS Lambda). The\nmaximum number of messages is limited to 1.000 messages per user. So we\nexpect Postgres to have an amazing performance for this scenario.\n\nAs I am not really familiar with PG (9.6, or 10, in case RDS release it\nbefore February) I would like to share what we are planning to do for this\nDB. So if you guys could share your thoughts, that would be great! :)\n\nTable structure:\n\n\n\n· MessageID (UUID) - PK\n\n· UserCountry (ISO)\n\n· UserRole (TEXT 15)\n\n· UserID (TEXT 30) – FK (although there is no constraint)\n\n· LifeCycle (RANGE DATE? Or 2 TimeStampWithTZ? Start_date and\nend_date?)\n\n· Channel (TEXT 15)\n\n· Tags (TEXT 2000)\n\n· Menu (TEXT 200)\n\n· Icon (TEXT 500) – URL to an image which will be used as an icon;\n\n· Title (TEXT 150)\n\n· *Body (JSON – up to 10K) – Meta data describing all the data to a\nspecific type of message. The JSON changes according to the type of\nmessage. We are assuming most messages will use less than 1K for this\nfield.*\n\n· Delete (BOOLEAN) – Soft Delete\n\n· Created (Timestamp – With TZ)\n\n· CreatedBy (TEXT 50)\n\n\n\nOnly 1 table\n\n· Messages\n\n3 indexes:\n\n· MessageID PK (UUID)\n\n· Main fetch key (UserCountry + UserID) - *****\n\n· End_date (To locate old messages that can be moved to another DB\n- which will hold the old messages);\n\n\n\nSizing and worst case scenario:\n\n\n\n· 500MM messages in the main DB\n\n· 4K queries per second (by UserID) – Max time of 500ms per query.\nSimples SELECT, with no ORDER, WHERE OR GROUP BY. Just grab all the\nmessages for a particular user. MAX 1000 messages per USER.\n\n· 1K inserts per second on average (So that in 1 hour we can insert\naround 3MM messages)\n\n· 1K deletes per second on average (So that in 1 hour we can remove\naround 3MM messages)\n\n\nMy question is:\n\n\n - Can we use any kind of compression for PostgreSQL which would result\n in reduced IO and disk size?\n - We are not relying on any kind of table partitioning, is that the best\n approach for this scenario?\n - Is PG on RDS capable of delivering this type of performance while\n requiring low maintenance?\n - What about Auto Vacuum? Any suggestion how to optimize it for such a\n work load (we will insert and delete millions of rows every day).\n\nP.S.: We are going to test all this, but if we don't get the performance we\nare expecting, all optimization tips from you guys will be really\nappreciated. :)\n\nThanks\n\nHi there,We are creating a new DB which will behave most like a file system, I mean, there will be no complex queries or joins running in the DB. The idea is to grab the WHOLE set of messages for a particular user and then filter, order, combine or full text search in the function itself (AWS Lambda). The maximum number of messages is limited to 1.000 messages per user. So we expect Postgres to have an amazing performance for this scenario.As I am not really familiar with PG (9.6, or 10, in case RDS release it before February) I would like to share what we are planning to do for this DB. So if you guys could share your thoughts, that would be great! 
:)Table structure:\n \n· \nMessageID\n(UUID) - PK\n· UserCountry\n(ISO)\n· UserRole\n(TEXT 15)\n· \nUserID\n(TEXT 30) – FK (although there is no constraint)\n· \nLifeCycle\n(RANGE DATE? Or 2 TimeStampWithTZ? Start_date and end_date?)\n· \nChannel\n(TEXT 15)\n· \nTags\n(TEXT 2000)\n· \nMenu\n(TEXT 200)\n· \nIcon\n(TEXT 500) – URL to an image which will be used as an icon;\n· \nTitle\n(TEXT 150)\n· \nBody\n(JSON – up to 10K) – Meta data describing all the data to a specific type of\nmessage. The JSON changes according to the type of message. We are assuming most messages will use less than 1K for this field.\n· \nDelete\n(BOOLEAN) – Soft Delete\n· \nCreated\n(Timestamp – With TZ)\n· \nCreatedBy\n(TEXT 50) \n \nOnly 1\ntable\n· \nMessages\n3 indexes:\n· MessageID PK\n(UUID)\n· \nMain\nfetch key (UserCountry + UserID) - *****\n· \nEnd_date\n(To locate old messages that can be moved to another DB - which will hold the\nold messages);\n \nSizing and worst case scenario:\n \n· \n500MM\nmessages in the main DB\n· \n4K\nqueries per second (by UserID) – Max time of 500ms per query. Simples SELECT, with no ORDER, WHERE OR GROUP BY. Just grab all the messages for a particular user. MAX 1000 messages per USER.\n· \n1K\ninserts per second on average (So that in 1 hour we can insert around 3MM\nmessages)\n· \n1K\ndeletes per second on average (So that in 1 hour we can remove around 3MM\nmessages)My question is:Can we use any kind of compression for PostgreSQL which would result in reduced IO and disk size?We are not relying on any kind of table partitioning, is that the best approach for this scenario?Is PG on RDS capable of delivering this type of performance while requiring low maintenance?What about Auto Vacuum? Any suggestion how to optimize it for such a work load (we will insert and delete millions of rows every day).P.S.: We are going to test all this, but if we don't get the performance we are expecting, all optimization tips from you guys will be really appreciated. :)Thanks",
"msg_date": "Mon, 27 Nov 2017 15:58:01 -0200",
"msg_from": "Jean Baro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Half billion records in one table? RDS"
},
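To make the proposed design concrete, here is a rough DDL sketch of the single table and the three indexes described above. Column names and sizes follow the list in the post; modelling LifeCycle as two timestamptz columns and renaming the Delete flag to deleted are assumptions.

CREATE TABLE messages (
    message_id    uuid PRIMARY KEY,
    user_country  char(2),
    user_role     varchar(15),
    user_id       varchar(30) NOT NULL,
    start_date    timestamptz,
    end_date      timestamptz,
    channel       varchar(15),
    tags          varchar(2000),
    menu          varchar(200),
    icon          varchar(500),
    title         varchar(150),
    body          json,          -- values over ~2 kB are TOAST-compressed
    deleted       boolean NOT NULL DEFAULT false,
    created       timestamptz NOT NULL DEFAULT now(),
    created_by    varchar(50)
);

-- Main fetch path: all messages for one user.
CREATE INDEX ON messages (user_country, user_id);
-- To find old messages that can be moved to the archive DB.
CREATE INDEX ON messages (end_date);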
{
"msg_contents": "Jean Baro wrote:\n> Hi there,\n> \n> We are creating a new DB which will behave most like a file system,\n> I mean, there will be no complex queries or joins running in the DB.\n> The idea is to grab the WHOLE set of messages for a particular user\n> and then filter, order, combine or full text search in the function itself (AWS Lambda).\n> The maximum number of messages is limited to 1.000 messages per user.\n> So we expect Postgres to have an amazing performance for this scenario.\n> \n[...]\n> \n> Sizing and worst case scenario:\n> \n> · 500MM messages in the main DB\n> · 4K queries per second (by UserID) – Max time of 500ms per query. Simples SELECT,\n> with no ORDER, WHERE OR GROUP BY. Just grab all the messages for a particular user. MAX 1000 messages per USER.\n> · 1K inserts per second on average (So that in 1 hour we can insert around 3MM messages)\n> · 1K deletes per second on average (So that in 1 hour we can remove around 3MM messages)\n> \n> My question is:\n> Can we use any kind of compression for PostgreSQL which would result in reduced IO and disk size?\n> We are not relying on any kind of table partitioning, is that the best approach for this scenario?\n> Is PG on RDS capable of delivering this type of performance while requiring low maintenance?\n> What about Auto Vacuum? Any suggestion how to optimize it for such a work load\n> (we will insert and delete millions of rows every day).\n\nIt sounds like your JSON data, which are your chief concern, are\nnot processed inside the database. For that, the type \"json\" is best.\nSuch data are automatically stored in a compressed format if their\nsize exceeds 2KB. The compression is not amazingly good, but fast.\n\nIf your application removes data by deleting them from the\ntable, partitioning won't help. It is useful if data get removed\nin bulk, e.g. if you want to delete all yesterday's data at once.\n\nThe workload does not sound amazingly large, so I'd expect PostgreSQL\nto have no problems with it with decent storage and CPU power,\nbut you'd have to test that.\n\nTune autovacuum if it cannot keep up (tables get bloated).\nThe first knob to twiddle is probably lowering \"autovacuum_vacuum_cost_delay\".\nAutovacuum might be your biggest problem (only guessing).\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Mon, 27 Nov 2017 19:18:33 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Half billion records in one table? RDS"
},
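A hedged sketch of the autovacuum knobs mentioned above, scoped to the one high-churn table. The values are illustrative only, the table name comes from the sketch earlier in the thread, and on RDS the instance-wide variant would be set through a parameter group rather than ALTER SYSTEM.

-- Per-table: vacuum earlier and with less throttling than the defaults.
ALTER TABLE messages SET (
    autovacuum_vacuum_cost_delay    = 2,    -- default 20 (ms)
    autovacuum_vacuum_scale_factor  = 0.01, -- default 0.2
    autovacuum_analyze_scale_factor = 0.02  -- default 0.1
);

-- Instance-wide alternative on self-managed PostgreSQL:
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 2;
SELECT pg_reload_conf();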
{
"msg_contents": "Why not store metadata in pg and the payload in S3?\n\nOn Mon, Nov 27, 2017 at 11:58 AM Jean Baro <[email protected]> wrote:\n\n> Hi there,\n>\n> We are creating a new DB which will behave most like a file system, I\n> mean, there will be no complex queries or joins running in the DB. The idea\n> is to grab the WHOLE set of messages for a particular user and then filter,\n> order, combine or full text search in the function itself (AWS Lambda). The\n> maximum number of messages is limited to 1.000 messages per user. So we\n> expect Postgres to have an amazing performance for this scenario.\n>\n> As I am not really familiar with PG (9.6, or 10, in case RDS release it\n> before February) I would like to share what we are planning to do for this\n> DB. So if you guys could share your thoughts, that would be great! :)\n>\n> Table structure:\n>\n>\n>\n> · MessageID (UUID) - PK\n>\n> · UserCountry (ISO)\n>\n> · UserRole (TEXT 15)\n>\n> · UserID (TEXT 30) – FK (although there is no constraint)\n>\n> · LifeCycle (RANGE DATE? Or 2 TimeStampWithTZ? Start_date and\n> end_date?)\n>\n> · Channel (TEXT 15)\n>\n> · Tags (TEXT 2000)\n>\n> · Menu (TEXT 200)\n>\n> · Icon (TEXT 500) – URL to an image which will be used as an icon;\n>\n> · Title (TEXT 150)\n>\n> · *Body (JSON – up to 10K) – Meta data describing all the data to\n> a specific type of message. The JSON changes according to the type of\n> message. We are assuming most messages will use less than 1K for this\n> field.*\n>\n> · Delete (BOOLEAN) – Soft Delete\n>\n> · Created (Timestamp – With TZ)\n>\n> · CreatedBy (TEXT 50)\n>\n>\n>\n> Only 1 table\n>\n> · Messages\n>\n> 3 indexes:\n>\n> · MessageID PK (UUID)\n>\n> · Main fetch key (UserCountry + UserID) - *****\n>\n> · End_date (To locate old messages that can be moved to another\n> DB - which will hold the old messages);\n>\n>\n>\n> Sizing and worst case scenario:\n>\n>\n>\n> · 500MM messages in the main DB\n>\n> · 4K queries per second (by UserID) – Max time of 500ms per\n> query. Simples SELECT, with no ORDER, WHERE OR GROUP BY. Just grab all the\n> messages for a particular user. MAX 1000 messages per USER.\n>\n> · 1K inserts per second on average (So that in 1 hour we can\n> insert around 3MM messages)\n>\n> · 1K deletes per second on average (So that in 1 hour we can\n> remove around 3MM messages)\n>\n>\n> My question is:\n>\n>\n> - Can we use any kind of compression for PostgreSQL which would result\n> in reduced IO and disk size?\n> - We are not relying on any kind of table partitioning, is that the\n> best approach for this scenario?\n> - Is PG on RDS capable of delivering this type of performance while\n> requiring low maintenance?\n> - What about Auto Vacuum? Any suggestion how to optimize it for such a\n> work load (we will insert and delete millions of rows every day).\n>\n> P.S.: We are going to test all this, but if we don't get the performance\n> we are expecting, all optimization tips from you guys will be really\n> appreciated. :)\n>\n> Thanks\n>\n>\n>\n> --\n\nRegards,\n/Aaron\n\nWhy not store metadata in pg and the payload in S3? On Mon, Nov 27, 2017 at 11:58 AM Jean Baro <[email protected]> wrote:Hi there,We are creating a new DB which will behave most like a file system, I mean, there will be no complex queries or joins running in the DB. The idea is to grab the WHOLE set of messages for a particular user and then filter, order, combine or full text search in the function itself (AWS Lambda). 
The maximum number of messages is limited to 1.000 messages per user. So we expect Postgres to have an amazing performance for this scenario.As I am not really familiar with PG (9.6, or 10, in case RDS release it before February) I would like to share what we are planning to do for this DB. So if you guys could share your thoughts, that would be great! :)Table structure:\n \n· \nMessageID\n(UUID) - PK\n· UserCountry\n(ISO)\n· UserRole\n(TEXT 15)\n· \nUserID\n(TEXT 30) – FK (although there is no constraint)\n· \nLifeCycle\n(RANGE DATE? Or 2 TimeStampWithTZ? Start_date and end_date?)\n· \nChannel\n(TEXT 15)\n· \nTags\n(TEXT 2000)\n· \nMenu\n(TEXT 200)\n· \nIcon\n(TEXT 500) – URL to an image which will be used as an icon;\n· \nTitle\n(TEXT 150)\n· \nBody\n(JSON – up to 10K) – Meta data describing all the data to a specific type of\nmessage. The JSON changes according to the type of message. We are assuming most messages will use less than 1K for this field.\n· \nDelete\n(BOOLEAN) – Soft Delete\n· \nCreated\n(Timestamp – With TZ)\n· \nCreatedBy\n(TEXT 50) \n \nOnly 1\ntable\n· \nMessages\n3 indexes:\n· MessageID PK\n(UUID)\n· \nMain\nfetch key (UserCountry + UserID) - *****\n· \nEnd_date\n(To locate old messages that can be moved to another DB - which will hold the\nold messages);\n \nSizing and worst case scenario:\n \n· \n500MM\nmessages in the main DB\n· \n4K\nqueries per second (by UserID) – Max time of 500ms per query. Simples SELECT, with no ORDER, WHERE OR GROUP BY. Just grab all the messages for a particular user. MAX 1000 messages per USER.\n· \n1K\ninserts per second on average (So that in 1 hour we can insert around 3MM\nmessages)\n· \n1K\ndeletes per second on average (So that in 1 hour we can remove around 3MM\nmessages)My question is:Can we use any kind of compression for PostgreSQL which would result in reduced IO and disk size?We are not relying on any kind of table partitioning, is that the best approach for this scenario?Is PG on RDS capable of delivering this type of performance while requiring low maintenance?What about Auto Vacuum? Any suggestion how to optimize it for such a work load (we will insert and delete millions of rows every day).P.S.: We are going to test all this, but if we don't get the performance we are expecting, all optimization tips from you guys will be really appreciated. :)Thanks\n-- Regards,/Aaron",
"msg_date": "Wed, 06 Dec 2017 01:59:35 +0000",
"msg_from": "Aaron Werman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Half billion records in one table? RDS"
}
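One way to read that suggestion against the table sketched earlier: keep only the queryable metadata in Postgres and store the up-to-10 kB body in S3, leaving a pointer in the row (body_s3_key is a hypothetical column name). This trades an extra object fetch per message for much smaller rows and less TOAST and vacuum traffic.

ALTER TABLE messages
    DROP COLUMN body,
    ADD COLUMN body_s3_key text;  -- e.g. 'messages/<user_id>/<message_id>.json'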
] |
[
{
"msg_contents": "Good afternoon.\n\nWe run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX\nmachines. We currently have effective_io_concurrency set to the default of\n1. I'm told that the data volume is a RAID 6 with 14 data drives and 2\nparity drives. I know that RAID10 is recommended, just working with what\nI've inherited for now (storage is high-end HP 3Par and HP recommended RAID\n6 for best performance).\n\nAnyway, I'm wondering if, in a virtualized environment with a VM datastore,\nit makes sense to set effective_io_concurrency closer to the number of data\ndrives?\n\nI'd also be interested in hearing how others have configured their\nPostgreSQL instances for VMs (if there's anything special to think about).\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nGood afternoon.We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX machines. We currently have effective_io_concurrency set to the default of 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2 parity drives. I know that RAID10 is recommended, just working with what I've inherited for now (storage is high-end HP 3Par and HP recommended RAID 6 for best performance).Anyway, I'm wondering if, in a virtualized environment with a VM datastore, it makes sense to set effective_io_concurrency closer to the number of data drives?I'd also be interested in hearing how others have configured their PostgreSQL instances for VMs (if there's anything special to think about).Don.-- Don Seilerwww.seiler.us",
"msg_date": "Mon, 27 Nov 2017 12:23:46 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting effective_io_concurrency in VM?"
},
{
"msg_contents": "On Mon, Nov 27, 2017 at 11:23 AM, Don Seiler <[email protected]> wrote:\n> Good afternoon.\n>\n> We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX\n> machines. We currently have effective_io_concurrency set to the default of\n> 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2\n> parity drives. I know that RAID10 is recommended, just working with what\n> I've inherited for now (storage is high-end HP 3Par and HP recommended RAID\n> 6 for best performance).\n>\n> Anyway, I'm wondering if, in a virtualized environment with a VM datastore,\n> it makes sense to set effective_io_concurrency closer to the number of data\n> drives?\n>\n> I'd also be interested in hearing how others have configured their\n> PostgreSQL instances for VMs (if there's anything special to think about).\n\nGenerally VMs are never going to be as fast as running on bare metal\netc. You can adjust it and test it with something simple like pgbench\nwith various settings for -c (concurrency) and see where it peaks etc\nwith the setting. This will at least get you into the ball park.\n\nA while back we needed fast machines with LOTS of storage (7TB data\ndrives with 5TB of data on them) and the only way to stuff that many\n800GB SSDs into a single machine was to use RAID-5 with a spare (I\nlobbied for RAID6 but was overidden eh...) We were able to achieve\nover 15k TPS in pgbench with a 400GB data store on those boxes. The\nsecret was to turn off the cache in the RAID controller and cranl up\neffective io concurrency to something around 10 (not sure, it's been a\nwhile).\n\ntl;dr: Only way to know is to benchmark it. I'd guess that somewhere\nbetween 10 and 20 is going to get the best throughput but that's just\na guess. Benchmark it and let us know!\n\n",
"msg_date": "Mon, 27 Nov 2017 11:40:19 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
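For the benchmarking suggested above, effective_io_concurrency can be changed at several levels without editing postgresql.conf, which makes it easy to compare values. The value 10 is just the ballpark mentioned in the reply, and mydb is a placeholder name.

SET effective_io_concurrency = 10;                                -- current session only
ALTER DATABASE mydb SET effective_io_concurrency = 10;            -- new sessions in one database
ALTER TABLESPACE pg_default SET (effective_io_concurrency = 10);  -- per tablespace, 9.6+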
{
"msg_contents": "El 27 nov. 2017 15:24, \"Don Seiler\" <[email protected]> escribió:\n\nGood afternoon.\n\nWe run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX\nmachines. We currently have effective_io_concurrency set to the default of\n1. I'm told that the data volume is a RAID 6 with 14 data drives and 2\nparity drives. I know that RAID10 is recommended, just working with what\nI've inherited for now (storage is high-end HP 3Par and HP recommended RAID\n6 for best performance).\n\nAnyway, I'm wondering if, in a virtualized environment with a VM datastore,\nit makes sense to set effective_io_concurrency closer to the number of data\ndrives?\n\nI'd also be interested in hearing how others have configured their\nPostgreSQL instances for VMs (if there's anything special to think about).\n\n\n\nIf the storage was exclusively for the Postgres box I'd try\neffective_io_concurrency somewhere between 8 and 12. Since it is probably\nnot, it will depend on the load the other VMs exert on the storage.\nAssuming the storage isnt already stressed and you need the extra IOPS, you\ncould test values between 4 and 8. You can of course be a lousy team player\nand have PG paralelize as much as it can, but this eventually will piss\noff the storage or vmware manager, which is never good as they can limit\nyour IO throughput at the virtualization or storage layers.\n\nCheers.\n\nEl 27 nov. 2017 15:24, \"Don Seiler\" <[email protected]> escribió:Good afternoon.We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX machines. We currently have effective_io_concurrency set to the default of 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2 parity drives. I know that RAID10 is recommended, just working with what I've inherited for now (storage is high-end HP 3Par and HP recommended RAID 6 for best performance).Anyway, I'm wondering if, in a virtualized environment with a VM datastore, it makes sense to set effective_io_concurrency closer to the number of data drives?I'd also be interested in hearing how others have configured their PostgreSQL instances for VMs (if there's anything special to think about).If the storage was exclusively for the Postgres box I'd try effective_io_concurrency somewhere between 8 and 12. Since it is probably not, it will depend on the load the other VMs exert on the storage. Assuming the storage isnt already stressed and you need the extra IOPS, you could test values between 4 and 8. You can of course be a lousy team player and have PG paralelize as much as it can, but this eventually will piss off the storage or vmware manager, which is never good as they can limit your IO throughput at the virtualization or storage layers.Cheers.",
"msg_date": "Mon, 27 Nov 2017 18:06:29 -0300",
"msg_from": "Fernando Hevia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
{
"msg_contents": "Whats the guest OS? I have been able to get Oracle to perform just as well\non Virtuals as it does on Physicals. I suspect the settings are pretty\nsimilar.\n\nOn Mon, Nov 27, 2017 at 3:06 PM, Fernando Hevia <[email protected]> wrote:\n\n>\n>\n> El 27 nov. 2017 15:24, \"Don Seiler\" <[email protected]> escribió:\n>\n> Good afternoon.\n>\n> We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX\n> machines. We currently have effective_io_concurrency set to the default of\n> 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2\n> parity drives. I know that RAID10 is recommended, just working with what\n> I've inherited for now (storage is high-end HP 3Par and HP recommended RAID\n> 6 for best performance).\n>\n> Anyway, I'm wondering if, in a virtualized environment with a VM\n> datastore, it makes sense to set effective_io_concurrency closer to the\n> number of data drives?\n>\n> I'd also be interested in hearing how others have configured their\n> PostgreSQL instances for VMs (if there's anything special to think about).\n>\n>\n>\n> If the storage was exclusively for the Postgres box I'd try\n> effective_io_concurrency somewhere between 8 and 12. Since it is probably\n> not, it will depend on the load the other VMs exert on the storage.\n> Assuming the storage isnt already stressed and you need the extra IOPS, you\n> could test values between 4 and 8. You can of course be a lousy team player\n> and have PG paralelize as much as it can, but this eventually will piss\n> off the storage or vmware manager, which is never good as they can limit\n> your IO throughput at the virtualization or storage layers.\n>\n> Cheers.\n>\n>\n>\n>\n\n\n-- \nAndrew W. Kerber\n\n'If at first you dont succeed, dont take up skydiving.'\n\nWhats the guest OS? I have been able to get Oracle to perform just as well on Virtuals as it does on Physicals. I suspect the settings are pretty similar.On Mon, Nov 27, 2017 at 3:06 PM, Fernando Hevia <[email protected]> wrote:El 27 nov. 2017 15:24, \"Don Seiler\" <[email protected]> escribió:Good afternoon.We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX machines. We currently have effective_io_concurrency set to the default of 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2 parity drives. I know that RAID10 is recommended, just working with what I've inherited for now (storage is high-end HP 3Par and HP recommended RAID 6 for best performance).Anyway, I'm wondering if, in a virtualized environment with a VM datastore, it makes sense to set effective_io_concurrency closer to the number of data drives?I'd also be interested in hearing how others have configured their PostgreSQL instances for VMs (if there's anything special to think about).If the storage was exclusively for the Postgres box I'd try effective_io_concurrency somewhere between 8 and 12. Since it is probably not, it will depend on the load the other VMs exert on the storage. Assuming the storage isnt already stressed and you need the extra IOPS, you could test values between 4 and 8. You can of course be a lousy team player and have PG paralelize as much as it can, but this eventually will piss off the storage or vmware manager, which is never good as they can limit your IO throughput at the virtualization or storage layers.Cheers.\n\n-- Andrew W. Kerber'If at first you dont succeed, dont take up skydiving.'",
"msg_date": "Mon, 27 Nov 2017 15:44:16 -0600",
"msg_from": "Andrew Kerber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
{
"msg_contents": "On Mon, Nov 27, 2017 at 3:44 PM, Andrew Kerber <[email protected]>\nwrote:\n\n> Whats the guest OS? I have been able to get Oracle to perform just as\n> well on Virtuals as it does on Physicals. I suspect the settings are\n> pretty similar.\n>\n\nGuest OS is CentOS 6 and CentOS 7 depending on which DB host we're looking\nat. I'd be interested in learning for either case.\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nOn Mon, Nov 27, 2017 at 3:44 PM, Andrew Kerber <[email protected]> wrote:Whats the guest OS? I have been able to get Oracle to perform just as well on Virtuals as it does on Physicals. I suspect the settings are pretty similar.Guest OS is CentOS 6 and CentOS 7 depending on which DB host we're looking at. I'd be interested in learning for either case. Don.-- Don Seilerwww.seiler.us",
"msg_date": "Mon, 27 Nov 2017 16:03:50 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
{
"msg_contents": "On Mon, Nov 27, 2017 at 10:40 AM, Scott Marlowe <[email protected]>\nwrote:\n\n>\n> Generally VMs are never going to be as fast as running on bare metal\n> etc. You can adjust it and test it with something simple like pgbench\n> with various settings for -c (concurrency) and see where it peaks etc\n> with the setting. This will at least get you into the ball park.\n>\n\nNone of the built-in workloads for pgbench cares a whit about\neffective_io_concurrency. He would have to come up with some custom\ntransactions to exercise that feature. (Or use the tool people use to run\nthe TPCH benchmark, rather than using pgbench's built in transactions)\n\nI think the best overall advice would be to configure it the same as you\nwould if it were not a VM. There may be cases where you diverge from that,\nbut I think each one would require extensive investigation and\nexperimentation, so can't be turned into a rule of thumb.\n\nCheers,\n\nJeff\n\nOn Mon, Nov 27, 2017 at 10:40 AM, Scott Marlowe <[email protected]> wrote:\nGenerally VMs are never going to be as fast as running on bare metal\netc. You can adjust it and test it with something simple like pgbench\nwith various settings for -c (concurrency) and see where it peaks etc\nwith the setting. This will at least get you into the ball park.None of the built-in workloads for pgbench cares a whit about effective_io_concurrency. He would have to come up with some custom transactions to exercise that feature. (Or use the tool people use to run the TPCH benchmark, rather than using pgbench's built in transactions)I think the best overall advice would be to configure it the same as you would if it were not a VM. There may be cases where you diverge from that, but I think each one would require extensive investigation and experimentation, so can't be turned into a rule of thumb.Cheers,Jeff",
"msg_date": "Mon, 27 Nov 2017 15:18:58 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
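Since on 9.x effective_io_concurrency only drives prefetching for bitmap heap scans, a custom test needs a query that produces that plan shape. A hedged sketch with hypothetical table and column names:

SET effective_io_concurrency = 10;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE created_at BETWEEN '2017-01-01' AND '2017-03-31';
-- Check that the plan shows a Bitmap Heap Scan (a range wide enough to
-- avoid a plain index scan but narrow enough to avoid a seq scan), then
-- repeat with different effective_io_concurrency settings and compare.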
{
"msg_contents": "Hi,\n\nOn 2017-11-27 11:40:19 -0700, Scott Marlowe wrote:\n> tl;dr: Only way to know is to benchmark it. I'd guess that somewhere\n> between 10 and 20 is going to get the best throughput but that's just\n> a guess. Benchmark it and let us know!\n\nFWIW, for SSDs my previous experiments suggest that the sweet spot is\nmore likely to be an order of magnitude or two bigger. Depends a bit on\nyour workload (including size of scans and concurrency) obviously.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 27 Nov 2017 15:57:24 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
},
{
"msg_contents": "On 28/11/17 07:40, Scott Marlowe wrote:\n\n> On Mon, Nov 27, 2017 at 11:23 AM, Don Seiler <[email protected]> wrote:\n>> Good afternoon.\n>>\n>> We run Postgres (currently 9.2, upgrading to 9.6 shortly) in VMWare ESX\n>> machines. We currently have effective_io_concurrency set to the default of\n>> 1. I'm told that the data volume is a RAID 6 with 14 data drives and 2\n>> parity drives. I know that RAID10 is recommended, just working with what\n>> I've inherited for now (storage is high-end HP 3Par and HP recommended RAID\n>> 6 for best performance).\n>>\n>> Anyway, I'm wondering if, in a virtualized environment with a VM datastore,\n>> it makes sense to set effective_io_concurrency closer to the number of data\n>> drives?\n>>\n>> I'd also be interested in hearing how others have configured their\n>> PostgreSQL instances for VMs (if there's anything special to think about).\n> Generally VMs are never going to be as fast as running on bare metal\n> etc. You can adjust it and test it with something simple like pgbench\n> with various settings for -c (concurrency) and see where it peaks etc\n> with the setting. This will at least get you into the ball park.\n>\n> A while back we needed fast machines with LOTS of storage (7TB data\n> drives with 5TB of data on them) and the only way to stuff that many\n> 800GB SSDs into a single machine was to use RAID-5 with a spare (I\n> lobbied for RAID6 but was overidden eh...) We were able to achieve\n> over 15k TPS in pgbench with a 400GB data store on those boxes. The\n> secret was to turn off the cache in the RAID controller and cranl up\n> effective io concurrency to something around 10 (not sure, it's been a\n> while).\n>\n> tl;dr: Only way to know is to benchmark it. I'd guess that somewhere\n> between 10 and 20 is going to get the best throughput but that's just\n> a guess. Benchmark it and let us know!\n\nReasonably modern Linux hosts with Linux guests using Libvirt/KVM should \nbe able to get bare metal performance for moderate numbers of cpus (<=8 \nlast time we benchmarked). It certainly *used* to be the case that \nvirtualization sucked for databases, but not so much now.\n\nThe advice to benhmark, however - is golden :-)\n\nCheers\n\nMark\n\n",
"msg_date": "Fri, 8 Dec 2017 17:51:08 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting effective_io_concurrency in VM?"
}
] |
[
{
"msg_contents": "I'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my\nquery.\n\nI want to do:\n\nselect investments.id, cim.yield\nFROM contributions\nJOIN investments ON contributions.investment_id = investments.id\nJOIN contribution_investment_metrics_view cim ON cim.investment_id =\ninvestments.id\nWHERE contributions.id IN ('\\x58c9c0d3ee944c48b32f814d', '\\x11')\nWhere contribution_investment_metrics_view is morally\n\nselect investment_id, first(val) from (select * from contribution_metrics\nUNION ALL select * from investment_metrics) group by id\n\nTypically, querying this view is very fast since I have indexes in both\ncomponent queries, leading to a very tight plan:\n\nSort Key: \"*SELECT* 1\".metric\n-> Subquery Scan on \"*SELECT* 1\" (cost=14.68..14.68 rows=1 width=26)\n(actual time=0.043..0.044 rows=2 loops=1)\n -> Sort (cost=14.68..14.68 rows=1 width=42) (actual\ntime=0.042..0.043 rows=2 loops=1)\n Sort Key: cm.metric, cm.last_update_on DESC\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.14..14.68 rows=1 width=42) (actual\ntime=0.032..0.034 rows=2 loops=1)\n -> Index Scan using contributions_investment_id_idx on\ncontributions (cost=0.08..4.77 rows=2 width=26) (actual time=0.026..0.027\nrows=1 loops=1)\n Index Cond: (investment_id = $1)\n -> Index Only Scan using\ncontribution_metrics_contribution_id_metric_last_update_on_idx on\ncontribution_metrics cm (cost=0.06..4.95 rows=2 width=34) (actual\ntime=0.005..0.006 r\n Index Cond: (contribution_id = contributions.id)\n Heap Fetches: 2\n-> Subquery Scan on \"*SELECT* 2\" (cost=0.08..5.86 rows=3 width=26)\n(actual time=0.008..0.008 rows=3 loops=1)\n -> Index Only Scan using\ninvestment_metrics_investment_id_metric_last_updated_on_idx on\ninvestment_metrics im (cost=0.08..5.85 rows=3 width=42) (actual\ntime=0.008..0.008 rows=3 loops=1)\n Index Cond: (investment_id = $1)\n Heap Fetches: 3\n\nUnfortunately, when I try to query this view in the larger query above, I\nget a *much* worse plan for this view, leading to >1000x degradation in\nperformance:\n\n-> Append (cost=10329.18..26290.92 rows=482027 width=26) (actual\ntime=90.157..324.544 rows=482027 loops=1)\n -> Subquery Scan on \"*SELECT* 1\" (cost=10329.18..10349.44 rows=5788\nwidth=26) (actual time=90.157..91.207 rows=5788 loops=1)\n -> Sort (cost=10329.18..10332.08 rows=5788 width=42) (actual\ntime=90.156..90.567 rows=5788 loops=1)\n Sort Key: contributions_1.investment_id, cm.metric,\ncm.last_update_on DESC\n Sort Method: quicksort Memory: 645kB\n -> Hash Join (cost=105.62..10256.84 rows=5788 width=42)\n(actual time=1.924..85.913 rows=5788 loops=1)\n Hash Cond: (contributions_1.id = cm.contribution_id)\n -> Seq Scan on contributions contributions_1\n(cost=0.00..9694.49 rows=351495 width=26) (actual time=0.003..38.794\nrows=351495 loops=1)\n -> Hash (cost=85.36..85.36 rows=5788 width=34)\n(actual time=1.907..1.907 rows=5788 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 453kB\n -> Seq Scan on contribution_metrics cm\n(cost=0.00..85.36 rows=5788 width=34) (actual time=0.003..0.936 rows=5788\nloops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.08..15941.48 rows=476239\nwidth=26) (actual time=0.017..203.006 rows=476239 loops=1)\n -> Index Only Scan using\ninvestment_metrics_investment_id_metric_last_updated_on_idx1 on\ninvestment_metrics im (cost=0.08..14512.76 rows=476239 width=42) (actual\ntime=0.016..160.410 rows=476239 l\n Heap Fetches: 476239\n\nI've played around with a number of solutions (including lateral joins) and\nthe closest I 
can come is:\n\nselect investment_id\nfrom contribution_investment_metrics\nwhere investment_id = (\n select investments.id\n from investments\n join contributions on investments.id = contributions.investment_id\n where contributions.id = '\\x58c9c0d3ee944c48b32f814d'\n)\n\nThis doesn't really work for my purposes, since I want to project columns\nfrom contributions and investments and I want to run this query on \"up to a\nhandful\" contributions at once (maybe more than one, never more than 100).\n\nI'm on PostgreSQL 9.6.5.\nSchema and full explain analyzes:\nhttps://gist.github.com/awreece/28c359c6d834717ab299665022b19fd6\nI don't think it's relevant, but since\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions asks -- I'm running in\nHeroku.\n\nWhat are my options here? Currently, I'm planning to avoid these bad plans\nby using a less straightforward query for the view:\n\nSELECT\n coalesce(contrib.id, cm.contribution_id) AS contribution_id,\n coalesce(cm.yield, im.yield) AS yield,\n coalesce(cm.term, im.term) AS term\nFROM contributions contrib\nJOIN investment_metrics_view im ON im.investment_id = contrib.investment_id\nFULL OUTER JOIN contribution_metrics_view cm ON cm.contribution_id =\ncontrib.id\n\nBest,\n~Alex Reece\n\nI'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my query.I want to do:\nselect investments.id, cim.yieldFROM contributions JOIN investments ON contributions.investment_id = investments.id JOIN contribution_investment_metrics_view cim ON cim.investment_id = investments.id WHERE contributions.id IN ('\\x58c9c0d3ee944c48b32f814d', '\\x11')\nWhere contribution_investment_metrics_view is morally select investment_id, first(val) from (select * from contribution_metrics UNION ALL select * from investment_metrics) group by idTypically, querying this view is very fast since I have indexes in both component queries, leading to a very tight plan:Sort Key: \"*SELECT* 1\".metric-> Subquery Scan on \"*SELECT* 1\" (cost=14.68..14.68 rows=1 width=26) (actual time=0.043..0.044 rows=2 loops=1) -> Sort (cost=14.68..14.68 rows=1 width=42) (actual time=0.042..0.043 rows=2 loops=1) Sort Key: cm.metric, cm.last_update_on DESC Sort Method: quicksort Memory: 25kB -> Nested Loop (cost=0.14..14.68 rows=1 width=42) (actual time=0.032..0.034 rows=2 loops=1) -> Index Scan using contributions_investment_id_idx on contributions (cost=0.08..4.77 rows=2 width=26) (actual time=0.026..0.027 rows=1 loops=1) Index Cond: (investment_id = $1) -> Index Only Scan using contribution_metrics_contribution_id_metric_last_update_on_idx on contribution_metrics cm (cost=0.06..4.95 rows=2 width=34) (actual time=0.005..0.006 r Index Cond: (contribution_id = contributions.id) Heap Fetches: 2-> Subquery Scan on \"*SELECT* 2\" (cost=0.08..5.86 rows=3 width=26) (actual time=0.008..0.008 rows=3 loops=1) -> Index Only Scan using investment_metrics_investment_id_metric_last_updated_on_idx on investment_metrics im (cost=0.08..5.85 rows=3 width=42) (actual time=0.008..0.008 rows=3 loops=1) Index Cond: (investment_id = $1) Heap Fetches: 3Unfortunately, when I try to query this view in the larger query above, I get a much worse plan for this view, leading to >1000x degradation in performance:-> Append (cost=10329.18..26290.92 rows=482027 width=26) (actual time=90.157..324.544 rows=482027 loops=1) -> Subquery Scan on \"*SELECT* 1\" (cost=10329.18..10349.44 rows=5788 width=26) (actual time=90.157..91.207 rows=5788 loops=1) -> Sort (cost=10329.18..10332.08 rows=5788 width=42) (actual time=90.156..90.567 
rows=5788 loops=1) Sort Key: contributions_1.investment_id, cm.metric, cm.last_update_on DESC Sort Method: quicksort Memory: 645kB -> Hash Join (cost=105.62..10256.84 rows=5788 width=42) (actual time=1.924..85.913 rows=5788 loops=1) Hash Cond: (contributions_1.id = cm.contribution_id) -> Seq Scan on contributions contributions_1 (cost=0.00..9694.49 rows=351495 width=26) (actual time=0.003..38.794 rows=351495 loops=1) -> Hash (cost=85.36..85.36 rows=5788 width=34) (actual time=1.907..1.907 rows=5788 loops=1) Buckets: 8192 Batches: 1 Memory Usage: 453kB -> Seq Scan on contribution_metrics cm (cost=0.00..85.36 rows=5788 width=34) (actual time=0.003..0.936 rows=5788 loops=1) -> Subquery Scan on \"*SELECT* 2\" (cost=0.08..15941.48 rows=476239 width=26) (actual time=0.017..203.006 rows=476239 loops=1) -> Index Only Scan using investment_metrics_investment_id_metric_last_updated_on_idx1 on investment_metrics im (cost=0.08..14512.76 rows=476239 width=42) (actual time=0.016..160.410 rows=476239 l Heap Fetches: 476239I've played around with a number of solutions (including lateral joins) and the closest I can come is:select investment_idfrom contribution_investment_metricswhere investment_id = ( select investments.id from investments join contributions on investments.id = contributions.investment_id where contributions.id = '\\x58c9c0d3ee944c48b32f814d')This doesn't really work for my purposes, since I want to project columns from contributions and investments and I want to run this query on \"up to a handful\" contributions at once (maybe more than one, never more than 100).I'm on PostgreSQL 9.6.5.Schema and full explain analyzes: https://gist.github.com/awreece/28c359c6d834717ab299665022b19fd6I don't think it's relevant, but since https://wiki.postgresql.org/wiki/SlowQueryQuestions asks -- I'm running in Heroku.What are my options here? Currently, I'm planning to avoid these bad plans by using a less straightforward query for the view:SELECT coalesce(contrib.id, cm.contribution_id) AS contribution_id, coalesce(cm.yield, im.yield) AS yield, coalesce(cm.term, im.term) AS term FROM contributions contribJOIN investment_metrics_view im ON im.investment_id = contrib.investment_id FULL OUTER JOIN contribution_metrics_view cm ON cm.contribution_id = contrib.idBest,~Alex Reece",
"msg_date": "Tue, 28 Nov 2017 10:13:38 +0000",
"msg_from": "Alex Reece <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad plan chosen for union all"
},
{
"msg_contents": "I managed to reduce my test case: the following query does not take\nadvantage of the index on contribution metrics.\n\nexplain select cim.yield\nfrom earnings\nJOIN contributions on contributions.id = earnings.note_id\nJOIN\n(\nSELECT contribution_id,\n max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE\nNULL::double precision END) AS yield\nfrom contribution_metrics\nJOIN metrics ON metrics.id = metric\ngroup by contribution_id\n) cim ON cim.contribution_id = contributions.id\nWHERE earnings.id = '\\x595400456c1f1400116b3843';\n\nI got this:\n\n Hash Join (cost=125.02..147.03 rows=1 width=8) (actual time=4.781..4.906\nrows=1 loops=1)\n Hash Cond: (contribution_metrics.contribution_id = contributions.id)\n -> HashAggregate (cost=116.86..126.64 rows=3261 width=21) (actual\ntime=4.157..4.600 rows=3261 loops=1)\n Group Key: contribution_metrics.contribution_id\n -> Hash Join (cost=1.11..108.18 rows=5788 width=33) (actual\ntime=0.021..2.425 rows=5788 loops=1)\n Hash Cond: (contribution_metrics.metric = metrics.id)\n -> Seq Scan on contribution_metrics (cost=0.00..85.36\nrows=5788 width=34) (actual time=0.006..0.695 rows=5788 loops=1)\n -> Hash (cost=1.05..1.05 rows=17 width=25) (actual\ntime=0.009..0.009 rows=17 loops=1)\n -> Seq Scan on metrics (cost=0.00..1.05 rows=17\nwidth=25) (actual time=0.002..0.005 rows=17 loops=1)\n -> Hash (cost=8.15..8.15 rows=1 width=26) (actual time=0.022..0.022\nrows=1 loops=1)\n -> Nested Loop (cost=0.14..8.15 rows=1 width=26) (actual\ntime=0.019..0.020 rows=1 loops=1)\n -> Index Scan using earnings_pkey on earnings\n(cost=0.06..4.06 rows=1 width=13) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: (id = '\\x595400456c1f1400116b3843'::bytea)\n -> Index Only Scan using contributions_pkey on\ncontributions (cost=0.08..4.09 rows=1 width=13) (actual time=0.008..0.009\nrows=1 loops=1)\n Index Cond: (id = earnings.note_id)\n Planning time: 0.487 ms\n Execution time: 4.975 ms\n\nBut I expected it to be equivalent to the plan from this query:\n\nselect cim.yield from (\nselect contribution_id,\nmax(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE NULL::double\nprecision END) AS yield\nfrom contribution_metrics JOIN metrics ON metrics.id = metric group by\ncontribution_id\n) cim where cim.contribution_id = (\nselect contributions.id from contributions\njoin earnings on earnings.note_id = contributions.id\nwhere earnings.id = '\\x595400456c1f1400116b3843')\n\nWhich gives me _this_ plan, that correctly uses the index on\ncontribution_metrics.\n\n Subquery Scan on cim (cost=9.32..14.23 rows=2 width=8) (actual\ntime=0.108..0.108 rows=1 loops=1)\n InitPlan 1 (returns $1)\n -> Nested Loop (cost=0.14..8.15 rows=1 width=13) (actual\ntime=0.054..0.055 rows=1 loops=1)\n -> Index Scan using earnings_pkey on earnings\n(cost=0.06..4.06 rows=1 width=13) (actual time=0.025..0.026 rows=1 loops=1)\n Index Cond: (id = '\\x595400456c1f1400116b3843'::bytea)\n -> Index Only Scan using contributions_pkey on contributions\n(cost=0.08..4.09 rows=1 width=13) (actual time=0.026..0.026 rows=1 loops=1)\n Index Cond: (id = earnings.note_id)\n -> GroupAggregate (cost=1.17..6.07 rows=2 width=21) (actual\ntime=0.108..0.108 rows=1 loops=1)\n Group Key: contribution_metrics.contribution_id\n -> Hash Join (cost=1.17..6.07 rows=2 width=33) (actual\ntime=0.100..0.101 rows=2 loops=1)\n Hash Cond: (contribution_metrics.metric = metrics.id)\n -> Index Scan using\ncontribution_metrics_contribution_id_metric_last_update_on_idx1 on\ncontribution_metrics ( cost=0.06..4.95 
rows=2 width=34) (actual time\n Index Cond: (contribution_id = $1)\n -> Hash (cost=1.05..1.05 rows=17 width=25) (actual\ntime=0.012..0.012 rows=17 loops=1)\n -> Seq Scan on metrics (cost=0.00..1.05 rows=17\nwidth=25) (actual time=0.004..0.006 rows=17 loops=1)\n Planning time: 0.396 ms\n Execution time: 0.165 ms\n\nschema here:\nhttps://gist.github.com/awreece/aeacbc818277c7c6d99477645e7fcd03\n\nBest,\n~Alex\n\n\n\nOn Tue, Nov 28, 2017 at 2:13 AM Alex Reece <[email protected]> wrote:\n\n> I'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my\n> query.\n>\n> I want to do:\n>\n> select investments.id, cim.yield\n> FROM contributions\n> JOIN investments ON contributions.investment_id = investments.id\n> JOIN contribution_investment_metrics_view cim ON cim.investment_id =\n> investments.id\n> WHERE contributions.id IN ('\\x58c9c0d3ee944c48b32f814d', '\\x11')\n> Where contribution_investment_metrics_view is morally\n>\n> select investment_id, first(val) from (select * from contribution_metrics\n> UNION ALL select * from investment_metrics) group by id\n>\n> Typically, querying this view is very fast since I have indexes in both\n> component queries, leading to a very tight plan:\n>\n> Sort Key: \"*SELECT* 1\".metric\n> -> Subquery Scan on \"*SELECT* 1\" (cost=14.68..14.68 rows=1 width=26)\n> (actual time=0.043..0.044 rows=2 loops=1)\n> -> Sort (cost=14.68..14.68 rows=1 width=42) (actual\n> time=0.042..0.043 rows=2 loops=1)\n> Sort Key: cm.metric, cm.last_update_on DESC\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=0.14..14.68 rows=1 width=42) (actual\n> time=0.032..0.034 rows=2 loops=1)\n> -> Index Scan using contributions_investment_id_idx on\n> contributions (cost=0.08..4.77 rows=2 width=26) (actual time=0.026..0.027\n> rows=1 loops=1)\n> Index Cond: (investment_id = $1)\n> -> Index Only Scan using\n> contribution_metrics_contribution_id_metric_last_update_on_idx on\n> contribution_metrics cm (cost=0.06..4.95 rows=2 width=34) (actual\n> time=0.005..0.006 r\n> Index Cond: (contribution_id = contributions.id)\n> Heap Fetches: 2\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.08..5.86 rows=3 width=26)\n> (actual time=0.008..0.008 rows=3 loops=1)\n> -> Index Only Scan using\n> investment_metrics_investment_id_metric_last_updated_on_idx on\n> investment_metrics im (cost=0.08..5.85 rows=3 width=42) (actual\n> time=0.008..0.008 rows=3 loops=1)\n> Index Cond: (investment_id = $1)\n> Heap Fetches: 3\n>\n> Unfortunately, when I try to query this view in the larger query above, I\n> get a *much* worse plan for this view, leading to >1000x degradation in\n> performance:\n>\n> -> Append (cost=10329.18..26290.92 rows=482027 width=26) (actual\n> time=90.157..324.544 rows=482027 loops=1)\n> -> Subquery Scan on \"*SELECT* 1\" (cost=10329.18..10349.44\n> rows=5788 width=26) (actual time=90.157..91.207 rows=5788 loops=1)\n> -> Sort (cost=10329.18..10332.08 rows=5788 width=42) (actual\n> time=90.156..90.567 rows=5788 loops=1)\n> Sort Key: contributions_1.investment_id, cm.metric,\n> cm.last_update_on DESC\n> Sort Method: quicksort Memory: 645kB\n> -> Hash Join (cost=105.62..10256.84 rows=5788\n> width=42) (actual time=1.924..85.913 rows=5788 loops=1)\n> Hash Cond: (contributions_1.id =\n> cm.contribution_id)\n> -> Seq Scan on contributions contributions_1\n> (cost=0.00..9694.49 rows=351495 width=26) (actual time=0.003..38.794\n> rows=351495 loops=1)\n> -> Hash (cost=85.36..85.36 rows=5788 width=34)\n> (actual time=1.907..1.907 rows=5788 loops=1)\n> Buckets: 8192 Batches: 1 Memory 
Usage:\n> 453kB\n> -> Seq Scan on contribution_metrics cm\n> (cost=0.00..85.36 rows=5788 width=34) (actual time=0.003..0.936 rows=5788\n> loops=1)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.08..15941.48 rows=476239\n> width=26) (actual time=0.017..203.006 rows=476239 loops=1)\n> -> Index Only Scan using\n> investment_metrics_investment_id_metric_last_updated_on_idx1 on\n> investment_metrics im (cost=0.08..14512.76 rows=476239 width=42) (actual\n> time=0.016..160.410 rows=476239 l\n> Heap Fetches: 476239\n>\n> I've played around with a number of solutions (including lateral joins)\n> and the closest I can come is:\n>\n> select investment_id\n> from contribution_investment_metrics\n> where investment_id = (\n> select investments.id\n> from investments\n> join contributions on investments.id = contributions.investment_id\n> where contributions.id = '\\x58c9c0d3ee944c48b32f814d'\n> )\n>\n> This doesn't really work for my purposes, since I want to project columns\n> from contributions and investments and I want to run this query on \"up to a\n> handful\" contributions at once (maybe more than one, never more than 100).\n>\n> I'm on PostgreSQL 9.6.5.\n> Schema and full explain analyzes:\n> https://gist.github.com/awreece/28c359c6d834717ab299665022b19fd6\n> I don't think it's relevant, but since\n> https://wiki.postgresql.org/wiki/SlowQueryQuestions asks -- I'm running\n> in Heroku.\n>\n> What are my options here? Currently, I'm planning to avoid these bad plans\n> by using a less straightforward query for the view:\n>\n> SELECT\n> coalesce(contrib.id, cm.contribution_id) AS contribution_id,\n> coalesce(cm.yield, im.yield) AS yield,\n> coalesce(cm.term, im.term) AS term\n> FROM contributions contrib\n> JOIN investment_metrics_view im ON im.investment_id =\n> contrib.investment_id\n> FULL OUTER JOIN contribution_metrics_view cm ON cm.contribution_id =\n> contrib.id\n>\n> Best,\n> ~Alex Reece\n>\n\nI managed to reduce my test case: the following query does not take advantage of the index on contribution metrics. 
explain select cim.yield from earnings JOIN contributions on contributions.id = earnings.note_id JOIN ( SELECT contribution_id, max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE NULL::double precision END) AS yield from contribution_metrics JOIN metrics ON metrics.id = metric group by contribution_id ) cim ON cim.contribution_id = contributions.id WHERE earnings.id = '\\x595400456c1f1400116b3843';I got this: Hash Join (cost=125.02..147.03 rows=1 width=8) (actual time=4.781..4.906 rows=1 loops=1) Hash Cond: (contribution_metrics.contribution_id = contributions.id) -> HashAggregate (cost=116.86..126.64 rows=3261 width=21) (actual time=4.157..4.600 rows=3261 loops=1) Group Key: contribution_metrics.contribution_id -> Hash Join (cost=1.11..108.18 rows=5788 width=33) (actual time=0.021..2.425 rows=5788 loops=1) Hash Cond: (contribution_metrics.metric = metrics.id) -> Seq Scan on contribution_metrics (cost=0.00..85.36 rows=5788 width=34) (actual time=0.006..0.695 rows=5788 loops=1) -> Hash (cost=1.05..1.05 rows=17 width=25) (actual time=0.009..0.009 rows=17 loops=1) -> Seq Scan on metrics (cost=0.00..1.05 rows=17 width=25) (actual time=0.002..0.005 rows=17 loops=1) -> Hash (cost=8.15..8.15 rows=1 width=26) (actual time=0.022..0.022 rows=1 loops=1) -> Nested Loop (cost=0.14..8.15 rows=1 width=26) (actual time=0.019..0.020 rows=1 loops=1) -> Index Scan using earnings_pkey on earnings (cost=0.06..4.06 rows=1 width=13) (actual time=0.009..0.009 rows=1 loops=1) Index Cond: (id = '\\x595400456c1f1400116b3843'::bytea) -> Index Only Scan using contributions_pkey on contributions (cost=0.08..4.09 rows=1 width=13) (actual time=0.008..0.009 rows=1 loops=1) Index Cond: (id = earnings.note_id) Planning time: 0.487 ms Execution time: 4.975 msBut I expected it to be equivalent to the plan from this query: select cim.yield from ( select contribution_id, max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE NULL::double precision END) AS yield from contribution_metrics JOIN metrics ON metrics.id = metric group by contribution_id ) cim where cim.contribution_id = ( select contributions.id from contributions join earnings on earnings.note_id = contributions.id where earnings.id = '\\x595400456c1f1400116b3843')Which gives me _this_ plan, that correctly uses the index on contribution_metrics. 
Subquery Scan on cim  (cost=9.32..14.23 rows=2 width=8) (actual time=0.108..0.108 rows=1 loops=1)
  InitPlan 1 (returns $1)
    ->  Nested Loop  (cost=0.14..8.15 rows=1 width=13) (actual time=0.054..0.055 rows=1 loops=1)
          ->  Index Scan using earnings_pkey on earnings  (cost=0.06..4.06 rows=1 width=13) (actual time=0.025..0.026 rows=1 loops=1)
                Index Cond: (id = '\\x595400456c1f1400116b3843'::bytea)
          ->  Index Only Scan using contributions_pkey on contributions  (cost=0.08..4.09 rows=1 width=13) (actual time=0.026..0.026 rows=1 loops=1)
                Index Cond: (id = earnings.note_id)
  ->  GroupAggregate  (cost=1.17..6.07 rows=2 width=21) (actual time=0.108..0.108 rows=1 loops=1)
        Group Key: contribution_metrics.contribution_id
        ->  Hash Join  (cost=1.17..6.07 rows=2 width=33) (actual time=0.100..0.101 rows=2 loops=1)
              Hash Cond: (contribution_metrics.metric = metrics.id)
              ->  Index Scan using contribution_metrics_contribution_id_metric_last_update_on_idx1 on contribution_metrics  (cost=0.06..4.95 rows=2 width=34) (actual time
                    Index Cond: (contribution_id = $1)
              ->  Hash  (cost=1.05..1.05 rows=17 width=25) (actual time=0.012..0.012 rows=17 loops=1)
                    ->  Seq Scan on metrics  (cost=0.00..1.05 rows=17 width=25) (actual time=0.004..0.006 rows=17 loops=1)
Planning time: 0.396 ms
Execution time: 0.165 ms

schema here: https://gist.github.com/awreece/aeacbc818277c7c6d99477645e7fcd03

Best,
~Alex

On Tue, Nov 28, 2017 at 2:13 AM Alex Reece <[email protected]> wrote:

I'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my query. I want to do:

select investments.id, cim.yield
FROM contributions
JOIN investments ON contributions.investment_id = investments.id
JOIN contribution_investment_metrics_view cim ON cim.investment_id = investments.id
WHERE contributions.id IN ('\\x58c9c0d3ee944c48b32f814d', '\\x11')

Where contribution_investment_metrics_view is morally
select investment_id, first(val) from (select * from contribution_metrics UNION ALL select * from investment_metrics) group by id

Typically, querying this view is very fast since I have indexes in both component queries, leading to a very tight plan:

Sort Key: \"*SELECT* 1\".metric
->  Subquery Scan on \"*SELECT* 1\"  (cost=14.68..14.68 rows=1 width=26) (actual time=0.043..0.044 rows=2 loops=1)
      ->  Sort  (cost=14.68..14.68 rows=1 width=42) (actual time=0.042..0.043 rows=2 loops=1)
            Sort Key: cm.metric, cm.last_update_on DESC
            Sort Method: quicksort  Memory: 25kB
            ->  Nested Loop  (cost=0.14..14.68 rows=1 width=42) (actual time=0.032..0.034 rows=2 loops=1)
                  ->  Index Scan using contributions_investment_id_idx on contributions  (cost=0.08..4.77 rows=2 width=26) (actual time=0.026..0.027 rows=1 loops=1)
                        Index Cond: (investment_id = $1)
                  ->  Index Only Scan using contribution_metrics_contribution_id_metric_last_update_on_idx on contribution_metrics cm  (cost=0.06..4.95 rows=2 width=34) (actual time=0.005..0.006 r
                        Index Cond: (contribution_id = contributions.id)
                        Heap Fetches: 2
->  Subquery Scan on \"*SELECT* 2\"  (cost=0.08..5.86 rows=3 width=26) (actual time=0.008..0.008 rows=3 loops=1)
      ->  Index Only Scan using investment_metrics_investment_id_metric_last_updated_on_idx on investment_metrics im  (cost=0.08..5.85 rows=3 width=42) (actual time=0.008..0.008 rows=3 loops=1)
            Index Cond: (investment_id = $1)
            Heap Fetches: 3

Unfortunately, when I try to query this view in the larger query above, I get a much worse plan for this view, leading to >1000x degradation in performance:

->  Append  (cost=10329.18..26290.92 rows=482027 width=26) (actual time=90.157..324.544 rows=482027 loops=1)
      ->  Subquery Scan on \"*SELECT* 1\"  (cost=10329.18..10349.44 rows=5788 width=26) (actual time=90.157..91.207 rows=5788 loops=1)
            ->  Sort  (cost=10329.18..10332.08 rows=5788 width=42) (actual time=90.156..90.567 rows=5788 loops=1)
                  Sort Key: contributions_1.investment_id, cm.metric, cm.last_update_on DESC
                  Sort Method: quicksort  Memory: 645kB
                  ->  Hash Join  (cost=105.62..10256.84 rows=5788 width=42) (actual time=1.924..85.913 rows=5788 loops=1)
                        Hash Cond: (contributions_1.id = cm.contribution_id)
                        ->  Seq Scan on contributions contributions_1  (cost=0.00..9694.49 rows=351495 width=26) (actual time=0.003..38.794 rows=351495 loops=1)
                        ->  Hash  (cost=85.36..85.36 rows=5788 width=34) (actual time=1.907..1.907 rows=5788 loops=1)
                              Buckets: 8192  Batches: 1  Memory Usage: 453kB
                              ->  Seq Scan on contribution_metrics cm  (cost=0.00..85.36 rows=5788 width=34) (actual time=0.003..0.936 rows=5788 loops=1)
      ->  Subquery Scan on \"*SELECT* 2\"  (cost=0.08..15941.48 rows=476239 width=26) (actual time=0.017..203.006 rows=476239 loops=1)
            ->  Index Only Scan using investment_metrics_investment_id_metric_last_updated_on_idx1 on investment_metrics im  (cost=0.08..14512.76 rows=476239 width=42) (actual time=0.016..160.410 rows=476239 l
                  Heap Fetches: 476239

I've played around with a number of solutions (including lateral joins) and the closest I can come is:

select investment_id
from contribution_investment_metrics
where investment_id = (
    select investments.id
    from investments
    join contributions on investments.id = contributions.investment_id
    where contributions.id = '\\x58c9c0d3ee944c48b32f814d'
)

This doesn't really work for my purposes, since I want to project columns from contributions and investments and I want to run this query on \"up to a handful\" contributions at once (maybe more than one, never more than 100).

I'm on PostgreSQL 9.6.5.
Schema and full explain analyzes: https://gist.github.com/awreece/28c359c6d834717ab299665022b19fd6
I don't think it's relevant, but since https://wiki.postgresql.org/wiki/SlowQueryQuestions asks -- I'm running in Heroku.

What are my options here? Currently, I'm planning to avoid these bad plans by using a less straightforward query for the view:

SELECT coalesce(contrib.id, cm.contribution_id) AS contribution_id,
       coalesce(cm.yield, im.yield) AS yield,
       coalesce(cm.term, im.term) AS term
FROM contributions contrib
JOIN investment_metrics_view im ON im.investment_id = contrib.investment_id
FULL OUTER JOIN contribution_metrics_view cm ON cm.contribution_id = contrib.id

Best,
~Alex Reece",
"msg_date": "Wed, 29 Nov 2017 03:39:22 +0000",
"msg_from": "Alex Reece <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan chosen for union all"
},
{
"msg_contents": "Alex Reece <[email protected]> writes:\n> I managed to reduce my test case: the following query does not take\n> advantage of the index on contribution metrics.\n\nYeah. What you're wishing is that the planner would push a join\ncondition down into a subquery, but it won't do that at present.\nDoing so would require generating \"parameterized paths\" for subqueries.\nWhile I do not think there's any fundamental technical reason anymore\nthat we couldn't do so, there's considerable risk of wasting a lot of\nplanner cycles chasing unprofitable plan alternatives. Anyway it was\ntotally impractical before 9.6's upper-planner-pathification changes,\nand not all of the dust has settled from that rewrite.\n\n> But I expected it to be equivalent to the plan from this query:\n\nThe difference here is that, from the perspective of the outer query,\nthe WHERE condition is a restriction clause on the \"cim\" relation,\nnot a join clause. So it will get pushed down into the subquery\nwithout creating any join order constraints on the outer query.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 28 Nov 2017 23:43:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan chosen for union all"
},
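A side-by-side sketch of the two query shapes being distinguished above, assembled from queries quoted elsewhere in this thread (the view, ids, and column names are taken from the thread and are not verified against the gist schema):

-- Join clause: cim.investment_id is compared with another relation's column.
-- Using the view's indexes here would require a parameterized subquery path,
-- which the planner does not generate, so the view's full output is scanned.
select investments.id, cim.yield
from contributions
join investments on contributions.investment_id = investments.id
join contribution_investment_metrics_view cim on cim.investment_id = investments.id
where contributions.id = '\x58c9c0d3ee944c48b32f814d';

-- Restriction clause: the scalar subquery makes the condition a filter on cim
-- alone, so it is pushed into the view and the per-investment indexes are used.
select cim.yield
from contribution_investment_metrics_view cim
where cim.investment_id = (
    select c.investment_id
    from contributions c
    where c.id = '\x58c9c0d3ee944c48b32f814d'
);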
{
"msg_contents": "One more thing. Given this:\n\n\n> The difference here is that, from the perspective of the outer query,\n> the WHERE condition is a restriction clause on the \"cim\" relation,\n> not a join clause. So it will get pushed down into the subquery\n> without creating any join order constraints on the outer query.\n\n\nI expected the lateral form of the query to properly use the indexes. Sure\nenough, this correctly uses the index:\n\nexplain select cim.yield\nfrom earnings\nJOIN contributions on contributions.id = earnings.note_id\nJOIN LATERAL\n(\n SELECT contribution_id,\n max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE\nNULL::double precision END) AS yield\n from contribution_metrics\n JOIN metrics ON metrics.id = metric WHERE contributions.id =\ncontribution_id\n group by contribution_id\n) cim ON true\nWHERE earnings.id = '\\x595400456c1f1400116b3843'\n\nHowever, when I try to wrap that subquery query again (e.g. as I would need\nto if it were a view), it doesn't restrict:\n\nselect cim.yield\nfrom earnings\n\nJOIN contributions on contributions.id = earnings.note_id\nJOIN LATERAL\n(\n select * from\n (\n SELECT contribution_id,\n max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE\nNULL::double precision END) AS yield\n from contribution_metrics\n JOIN metrics ON metrics.id = metric\n\n group by contribution_id\n ) my_view WHERE contribution_id = contributions.id\n) cim ON true\nWHERE earnings.id = '\\x595400456c1f1400116b3843'\n\nIs there a way I can get the restriction to be pushed down into my subquery\nin this lateral form?\n\nBest,\n~Alex\n\nOne more thing. Given this: The difference here is that, from the perspective of the outer query,\nthe WHERE condition is a restriction clause on the \"cim\" relation,\nnot a join clause. So it will get pushed down into the subquery\nwithout creating any join order constraints on the outer query.I expected the lateral form of the query to properly use the indexes. Sure enough, this correctly uses the index:explain select cim.yieldfrom earningsJOIN contributions on contributions.id = earnings.note_idJOIN LATERAL ( SELECT contribution_id, max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE NULL::double precision END) AS yield from contribution_metrics JOIN metrics ON metrics.id = metric WHERE contributions.id = contribution_id group by contribution_id) cim ON trueWHERE earnings.id = '\\x595400456c1f1400116b3843'However, when I try to wrap that subquery query again (e.g. as I would need to if it were a view), it doesn't restrict:select cim.yieldfrom earnings JOIN contributions on contributions.id = earnings.note_idJOIN LATERAL ( select * from ( SELECT contribution_id, max(CASE metrics.name WHEN 'Yield'::text THEN projected ELSE NULL::double precision END) AS yield from contribution_metrics JOIN metrics ON metrics.id = metric group by contribution_id ) my_view WHERE contribution_id = contributions.id) cim ON trueWHERE earnings.id = '\\x595400456c1f1400116b3843'Is there a way I can get the restriction to be pushed down into my subquery in this lateral form?Best,~Alex",
"msg_date": "Wed, 29 Nov 2017 05:31:22 +0000",
"msg_from": "Alex Reece <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan chosen for union all"
}
] |
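The archived thread ends with that question open. One workaround, sketched here under the column and type names used in the queries above (it is not proposed anywhere in the thread, so treat it as an untested suggestion), is to keep the correlation inside the subquery by wrapping the aggregate in a plain SQL function and calling it per contribution:

create function contribution_yield(cid bytea) returns double precision
language sql stable as $$
    -- Same aggregate as in the working lateral query above, but the filter on
    -- contribution_id is part of the function body, so each call performs a
    -- small indexed lookup for one contribution.
    select max(case m.name when 'Yield' then cm.projected end)
    from contribution_metrics cm
    join metrics m on m.id = cm.metric
    where cm.contribution_id = cid
$$;

select contribution_yield(contributions.id) as yield
from earnings
join contributions on contributions.id = earnings.note_id
where earnings.id = '\x595400456c1f1400116b3843';

This trades the single set-returning view for one indexed lookup per contribution row, which fits the stated workload of "never more than 100" contributions per query.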
[
{
"msg_contents": "Getting following error when running the pg_routing function. The text\nvalue consists over 1.8gig is there something I can tweak to handle the\nlarge size in function?\n\nERROR: invalid memory alloc request size 1080000000\nCONTEXT: PL/=pgSQL function pgr_xxxx(text,anyarray,boolean) line 3 at\nRETURN QUERY\n\nB\n\nGetting following error when running the pg_routing function. The text value consists over 1.8gig is there something I can tweak to handle the large size in function?ERROR: invalid memory alloc request size 1080000000CONTEXT: PL/=pgSQL function pgr_xxxx(text,anyarray,boolean) line 3 at RETURN QUERYB",
"msg_date": "Tue, 28 Nov 2017 10:56:24 -0800",
"msg_from": "bima p <[email protected]>",
"msg_from_op": true,
"msg_subject": "Invalid mem alloc request on function"
}
] |
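The error here is PostgreSQL's hard cap on a single memory allocation: one allocation request, and therefore any individual text or bytea value, cannot exceed 1 GB, so a value of roughly 1.08 GB fails no matter which function builds it. A minimal way to reproduce the same class of error, independent of pgRouting (a sketch; the byte count reported in the message will differ slightly because of datum header overhead):

-- Tries to build a single ~1.08 GB text value and fails with an
-- "invalid memory alloc request size ..." error.
select length(repeat('x', 1080000000));

-- Values under the 1 GB cap are fine; the practical fix is to keep any
-- single value below the cap, for example by splitting the input into
-- batches instead of passing one huge text parameter.
select length(repeat('x', 100000000));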