[
{
"msg_contents": "Good Morning,\n\nWe are a team of Graduate students work at the IBM Runtime Labs at University of New Brunswick. We are working on development of a query compiler for postgresql. We are already done with tuple deformation but we are not able to get the expected speedup compared to the one without jitting. We are looking at the IR for any scope of optimization, but at the same time we are profiling the queires.\n\nI would like to have your views if perf the best suitable tool for profiling the queries in postgresql or do we have any other tool or shared libraries that share statistics about all the backend functions which we could utilize.\n\nPlease share your thoughts.\n\n\nHave a good day:)\nDebajyoti\n\n\n\n\n\n\n\n\n\n\nGood Morning,\n \nWe are a team of Graduate students work at the IBM Runtime Labs at University of New Brunswick. We are working on development of a query compiler for postgresql. We are already done with tuple deformation but we are not able to get the\n expected speedup compared to the one without jitting. We are looking at the IR for any scope of optimization, but at the same time we are profiling the queires.\n\n \nI would like to have your views if perf the best suitable tool for profiling the queries in postgresql or do we have any other tool or shared libraries that share statistics about all the backend functions which we could utilize.\n \nPlease share your thoughts.\n \n \nHave a good dayJ\nDebajyoti",
"msg_date": "Sat, 24 Oct 2020 22:33:08 +0000",
"msg_from": "Debajyoti Datta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Profiling tool for postgresql queries "
}
]
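The workflow the first thread asks about is usually: find the PID of the backend serving the session under test (via `SELECT pg_backend_pid();`), attach Linux `perf` to that process with call-graph sampling, then inspect the report. A minimal sketch of the commands involved, expressed as argument lists; the PID 12345 is a placeholder, and `perf record -p PID -g` / `perf report` are the standard perf invocations, nothing PostgreSQL-specific:

```python
# Sketch: assemble the perf commands for profiling one PostgreSQL backend.
# The PID would come from `SELECT pg_backend_pid();` in the session being
# profiled; 12345 below is a placeholder, not a real process.

def perf_record_cmd(backend_pid, seconds=30):
    """perf record with call graphs (-g), attached to one process (-p),
    sampling for `seconds` seconds via the trailing `sleep`."""
    return ["perf", "record", "-p", str(backend_pid), "-g",
            "--", "sleep", str(seconds)]

def perf_report_cmd():
    """perf report reads the ./perf.data file written by the record step."""
    return ["perf", "report"]

print(" ".join(perf_record_cmd(12345)))
```

Since the poster is profiling JITted queries, note that JIT-generated frames normally show up as anonymous addresses in perf output; PostgreSQL's `jit_profiling_support` developer option (where the LLVM build supports it) registers generated functions with perf so those frames resolve to symbols.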
[
{
"msg_contents": "Hi Performance Guys,\n\nI hope you can help me. I am joining two tables, that have a foreign key relationship. So I expect the optimizer to estimate the number of the resulting rows to be the same as the number of the returned rows of one of the tables. But the estimate is way too low.\n\nI have built a test case, where the problem is easily to be seen.\n\nTestcase:\n-- create a large table with one column with only 3 possible values, the other rows are only there to increase the selectivity\ncreate table fact (low_card integer, anydata1 integer, anydata2 integer);\ninsert into fact (low_card, anydata1, anydata2) select floor(random()*3+1),floor(random()*1000+1),floor(random()*100+1) from generate_series(1,10000);\n\n-- create a smaller table with only unique values to be referenced by foreign key\ncreate table dim as (select distinct low_card, anydata1, anydata2 from fact);\ncreate unique index on dim (low_card, anydata1, anydata2);\nalter table fact add constraint fk foreign key (low_card, anydata1, anydata2) references dim (low_card, anydata1, anydata2);\n\nanalyze fact;\nanalyze dim;\n\nAnd here comes the query:\nexplain analyze\nselect count(*) from fact inner join dim on (fact.low_card=dim.low_card and fact.anydata1=dim.anydata1 and fact.anydata2=dim.anydata2)\nwhere fact.low_card=1;\n\nAggregate (cost=424.11..424.12 rows=1 width=8) (actual time=7.899..7.903 rows=1 loops=1)\n -> Hash Join (cost=226.27..423.82 rows=115 width=0) (actual time=3.150..7.511 rows=3344 loops=1) <=========== With the FK, the estimation should be 3344, but it is 115 rows\n Hash Cond: ((fact.anydata1 = dim.anydata1) AND (fact.anydata2 = dim.anydata2))\n -> Seq Scan on fact (cost=0.00..180.00 rows=3344 width=12) (actual time=0.025..2.289 rows=3344 loops=1)\n Filter: (low_card = 1)\n Rows Removed by Filter: 6656\n -> Hash (cost=176.89..176.89 rows=3292 width=12) (actual time=3.105..3.107 rows=3292 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 174kB\n -> Seq Scan on 
dim (cost=0.00..176.89 rows=3292 width=12) (actual time=0.014..2.103 rows=3292 loops=1)\n Filter: (low_card = 1)\n Rows Removed by Filter: 6539\nPlanning Time: 0.619 ms\nExecution Time: 7.973 ms\n\n\nMy problem is, that I am joining a lot more tables in reality and since the row estimates are so low, the optimizer goes for nested loops, leading to inacceptable execution times.\n\nQuestion: How can I get the optimizer to use the information about the foreign key relationship and get accurate estimates?\n\nSigrid Ehrenreich\n\n\n\n\n\n\n\n\n\n\nHi Performance Guys,\n \nI hope you can help me. I am joining two tables, that have a foreign key relationship. So I expect the optimizer to estimate the number of the resulting rows to be the same as the number of the returned rows of one of\n the tables. But the estimate is way too low.\n \nI have built a test case, where the problem is easily to be seen.\n \nTestcase:\n-- create a large table with one column with only 3 possible values, the other rows are only there to increase the selectivity\ncreate table fact (low_card integer, anydata1 integer, anydata2 integer);\ninsert into fact (low_card, anydata1, anydata2) select floor(random()*3+1),floor(random()*1000+1),floor(random()*100+1) from generate_series(1,10000);\n \n-- create a smaller table with only unique values to be referenced by foreign key\ncreate table dim as (select distinct low_card, anydata1, anydata2 from fact);\ncreate unique index on dim (low_card, anydata1, anydata2);\nalter table fact add constraint fk foreign key (low_card, anydata1, anydata2) references dim (low_card, anydata1, anydata2);\n \nanalyze fact;\nanalyze dim;\n \nAnd here comes the query:\nexplain analyze\nselect count(*) from fact inner join dim on (fact.low_card=dim.low_card and fact.anydata1=dim.anydata1 and fact.anydata2=dim.anydata2)\nwhere fact.low_card=1;\n \nAggregate (cost=424.11..424.12 rows=1 width=8) (actual time=7.899..7.903 rows=1 loops=1)\n -> Hash Join (cost=226.27..423.82 
rows=115 width=0) (actual time=3.150..7.511 rows=3344 loops=1) \n<=========== With the FK, the estimation should be 3344, but it is 115 rows\n Hash Cond: ((fact.anydata1 = dim.anydata1) AND (fact.anydata2 = dim.anydata2)) \n\n -> Seq Scan on fact (cost=0.00..180.00 rows=3344 width=12) (actual time=0.025..2.289 rows=3344 loops=1)\n\n Filter: (low_card = 1)\n Rows Removed by Filter: 6656\n -> Hash (cost=176.89..176.89 rows=3292 width=12) (actual time=3.105..3.107 rows=3292 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 174kB\n -> Seq Scan on dim (cost=0.00..176.89 rows=3292 width=12) (actual time=0.014..2.103 rows=3292 loops=1)\n Filter: (low_card = 1)\n Rows Removed by Filter: 6539\nPlanning Time: 0.619 ms\nExecution Time: 7.973 ms\n \n \nMy problem is, that I am joining a lot more tables in reality and since the row estimates are so low, the optimizer goes for nested loops, leading to inacceptable execution times.\n \nQuestion: How can I get the optimizer to use the information about the foreign key relationship and get accurate estimates?\n \nSigrid Ehrenreich",
"msg_date": "Mon, 26 Oct 2020 15:58:05 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "On Mon, Oct 26, 2020 at 03:58:05PM +0000, Ehrenreich, Sigrid wrote:\n> Hi Performance Guys,\n> \n> I hope you can help me. I am joining two tables, that have a foreign key relationship. So I expect the optimizer to estimate the number of the resulting rows to be the same as the number of the returned rows of one of the tables. But the estimate is way too low.\n> \n> I have built a test case, where the problem is easily to be seen.\n\nI reproduced the problem on v14dev.\n\nNote the different estimates between these:\n\npostgres=# explain analyze SELECT * FROM fact INNER JOIN dim USING (low_card,anydata1,anydata2) WHERE fact.low_card=2;\n Hash Join (cost=161.58..358.85 rows=112 width=12) (actual time=8.707..15.717 rows=3289 loops=1)\n\npostgres=# explain analyze SELECT * FROM fact INNER JOIN dim USING (low_card,anydata1,anydata2) WHERE fact.low_card BETWEEN 2 AND 2;\n Hash Join (cost=324.71..555.61 rows=3289 width=12) (actual time=15.966..23.394 rows=3289 loops=1)\n\nI think because low_card has an equality comparison in addition to the equijoin,\nit's being disqualified from the planner's mechanism to consider FKs in join\nselectivity.\nhttps://doxygen.postgresql.org/costsize_8c_source.html#l05024\n\nI don't know enough about this to help more than that.\n\n\n",
"msg_date": "Mon, 26 Oct 2020 15:34:13 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Optimizer ignores information about foreign key\n relationship, severely misestimating number of returned rows in join"
},
{
"msg_contents": "On Tue, 27 Oct 2020 at 06:54, Ehrenreich, Sigrid <[email protected]> wrote:\n> -> Hash Join (cost=226.27..423.82 rows=115 width=0) (actual time=3.150..7.511 rows=3344 loops=1) <=========== With the FK, the estimation should be 3344, but it is 115 rows\n\nI'd have expected this to find the foreign key and have the join\nselectivity of 1.0, but I see it does not due to the fact that one of\nthe EquivalenceClass has a constant due to the fact.low_card = 1 qual.\n\nIn build_join_rel() we call build_joinrel_restrictlist() to get the\njoin quals that need to be evaluated at the join level, but we only\nget the fact.anydata1=dim.anydata1 and fact.anydata2=dim.anydata2\nquals there. The low_card qual gets pushed down to the scan level on\neach side of the join, so no need for it to get evaluated at the join\nlevel. Later in build_join_rel() we do set_joinrel_size_estimates().\nThe restrictlist with just the two quals is what we pass to\nget_foreign_key_join_selectivity(). Only two of the foreign key\ncolumns are matched there, therefore we don't class that as a match\nand just leave it up to the normal selectivity functions.\n\nI feel like we could probably do better there and perhaps somehow\ncount ECs with ec_has_const as matched, but there seems to be some\nassumptions later in get_foreign_key_join_selectivity() where we\ndetermine the selectivity based on the base rel's tuple count. We'd\nneed to account for how many rows remainder after filtering the ECs\nwith ec_has_const == true, else we'd be doing the wrong thing. That\nneeds more thought than I have time for right now.\n\nYour case would work if the foreign key had been on just anydata1 and\nanydata2, but there's not much chance of that working without a unique\nindex on those two columns.\n\nExtended statistics won't help you here either since they're currently\nnot used for join estimations.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Oct 2020 11:25:45 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Tue, 27 Oct 2020 at 06:54, Ehrenreich, Sigrid <[email protected]> wrote:\n>> -> Hash Join (cost=226.27..423.82 rows=115 width=0) (actual time=3.150..7.511 rows=3344 loops=1) <=========== With the FK, the estimation should be 3344, but it is 115 rows\n\n> I'd have expected this to find the foreign key and have the join\n> selectivity of 1.0, but I see it does not due to the fact that one of\n> the EquivalenceClass has a constant due to the fact.low_card = 1 qual.\n\nRight.\n\n> I feel like we could probably do better there and perhaps somehow\n> count ECs with ec_has_const as matched, but there seems to be some\n> assumptions later in get_foreign_key_join_selectivity() where we\n> determine the selectivity based on the base rel's tuple count. We'd\n> need to account for how many rows remainder after filtering the ECs\n> with ec_has_const == true, else we'd be doing the wrong thing. That\n> needs more thought than I have time for right now.\n\nYeah, I'm fooling with a patch for that now. The basic problem is\nthat the selectivity of the x = constant clauses has already been\nfactored into the sizes of both join input relations, so we're\ndouble-counting it if we just apply the existing FK-based\nselectivity estimate. I think though that we can recover the\nselectivity associated with that qual on the FK side (or should\nit be the PK side?) and cancel it out of the FK selectivity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Oct 2020 18:54:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "Hi Tom,\n\nA patch would be very much appreciated.\nWe are currently running on Version 12, but could upgrade to 13, if necessary.\n\nCould you send me a notification if you managed to program a patch for that?\n\nRegards,\nSigrid\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Monday, October 26, 2020 11:54 PM\nTo: David Rowley <[email protected]>\nCc: Ehrenreich, Sigrid <[email protected]>; [email protected]\nSubject: Re: Postgres Optimizer ignores information about foreign key relationship, severly misestimating number of returned rows in join\n\nDavid Rowley <[email protected]> writes:\n> On Tue, 27 Oct 2020 at 06:54, Ehrenreich, Sigrid <[email protected]> wrote:\n>> -> Hash Join (cost=226.27..423.82 rows=115 width=0) (actual time=3.150..7.511 rows=3344 loops=1) <=========== With the FK, the estimation should be 3344, but it is 115 rows\n\n> I'd have expected this to find the foreign key and have the join\n> selectivity of 1.0, but I see it does not due to the fact that one of\n> the EquivalenceClass has a constant due to the fact.low_card = 1 qual.\n\nRight.\n\n> I feel like we could probably do better there and perhaps somehow\n> count ECs with ec_has_const as matched, but there seems to be some\n> assumptions later in get_foreign_key_join_selectivity() where we\n> determine the selectivity based on the base rel's tuple count. We'd\n> need to account for how many rows remainder after filtering the ECs\n> with ec_has_const == true, else we'd be doing the wrong thing. That\n> needs more thought than I have time for right now.\n\nYeah, I'm fooling with a patch for that now. The basic problem is\nthat the selectivity of the x = constant clauses has already been\nfactored into the sizes of both join input relations, so we're\ndouble-counting it if we just apply the existing FK-based\nselectivity estimate. 
I think though that we can recover the\nselectivity associated with that qual on the FK side (or should\nit be the PK side?) and cancel it out of the FK selectivity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Oct 2020 07:50:19 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "\"Ehrenreich, Sigrid\" <[email protected]> writes:\n> A patch would be very much appreciated.\n> We are currently running on Version 12, but could upgrade to 13, if necessary.\n> Could you send me a notification if you managed to program a patch for that?\n\nI've pushed a patch for this to HEAD, but current thinking is that we\nwill not be back-patching it. Still, if you're desperate you could\nconsider running a custom build of v13 with that patch --- a quick\ncheck suggests that it would back-patch easily. v12 would be a\nslightly harder lift (I see one hunk doesn't apply) but probably\nnot by much.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=ad1c36b0709e47cdb3cc4abd6c939fe64279b63f\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:55:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "Hi Tom,\r\n\r\nThanks a lot for your help!\r\n\r\nIf it is in the HEAD, does it mean, it will be included in v14?\r\n\r\nI'll have to see, if we dare building our own v13 version with the patch.\r\n(I would love to, because I am simply thrilled to pieces, having a patch made by you for us 😉)\r\n\r\nRegards,\r\nSigrid\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <[email protected]> \r\nSent: Wednesday, October 28, 2020 5:55 PM\r\nTo: Ehrenreich, Sigrid <[email protected]>\r\nCc: David Rowley <[email protected]>; [email protected]\r\nSubject: Re: Postgres Optimizer ignores information about foreign key relationship, severly misestimating number of returned rows in join\r\n\r\n\"Ehrenreich, Sigrid\" <[email protected]> writes:\r\n> A patch would be very much appreciated.\r\n> We are currently running on Version 12, but could upgrade to 13, if necessary.\r\n> Could you send me a notification if you managed to program a patch for that?\r\n\r\nI've pushed a patch for this to HEAD, but current thinking is that we\r\nwill not be back-patching it. Still, if you're desperate you could\r\nconsider running a custom build of v13 with that patch --- a quick\r\ncheck suggests that it would back-patch easily. v12 would be a\r\nslightly harder lift (I see one hunk doesn't apply) but probably\r\nnot by much.\r\n\r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=ad1c36b0709e47cdb3cc4abd6c939fe64279b63f\r\n\r\n\t\t\tregards, tom lane\r\n",
"msg_date": "Thu, 29 Oct 2020 06:31:50 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 9:43 AM Ehrenreich, Sigrid\n<[email protected]> wrote:\n>\n> Hi Tom,\n>\n> Thanks a lot for your help!\n>\n> If it is in the HEAD, does it mean, it will be included in v14?\n\n\nYes, that's precisely what it means. Unless someone finds something\nbad with it and it has to be removed of course, but in principle it\nmeans that it will be in 14.\n\n//Magnus\n\n\n>\n>\n> I'll have to see, if we dare building our own v13 version with the patch.\n> (I would love to, because I am simply thrilled to pieces, having a patch made by you for us )\n>\n> Regards,\n> Sigrid\n>\n> -----Original Message-----\n> From: Tom Lane <[email protected]>\n> Sent: Wednesday, October 28, 2020 5:55 PM\n> To: Ehrenreich, Sigrid <[email protected]>\n> Cc: David Rowley <[email protected]>; [email protected]\n> Subject: Re: Postgres Optimizer ignores information about foreign key relationship, severly misestimating number of returned rows in join\n>\n> \"Ehrenreich, Sigrid\" <[email protected]> writes:\n> > A patch would be very much appreciated.\n> > We are currently running on Version 12, but could upgrade to 13, if necessary.\n> > Could you send me a notification if you managed to program a patch for that?\n>\n> I've pushed a patch for this to HEAD, but current thinking is that we\n> will not be back-patching it. Still, if you're desperate you could\n> consider running a custom build of v13 with that patch --- a quick\n> check suggests that it would back-patch easily. v12 would be a\n> slightly harder lift (I see one hunk doesn't apply) but probably\n> not by much.\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=ad1c36b0709e47cdb3cc4abd6c939fe64279b63f\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 10:57:16 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
},
{
"msg_contents": "Thanks!\r\n\r\nRegards,\r\nSigrid\r\n\r\n-----Original Message-----\r\nFrom: Magnus Hagander <[email protected]> \r\nSent: Thursday, October 29, 2020 10:57 AM\r\nTo: Ehrenreich, Sigrid <[email protected]>\r\nCc: Tom Lane <[email protected]>; David Rowley <[email protected]>; [email protected]\r\nSubject: Re: Postgres Optimizer ignores information about foreign key relationship, severly misestimating number of returned rows in join\r\n\r\nOn Thu, Oct 29, 2020 at 9:43 AM Ehrenreich, Sigrid\r\n<[email protected]> wrote:\r\n>\r\n> Hi Tom,\r\n>\r\n> Thanks a lot for your help!\r\n>\r\n> If it is in the HEAD, does it mean, it will be included in v14?\r\n\r\n\r\nYes, that's precisely what it means. Unless someone finds something\r\nbad with it and it has to be removed of course, but in principle it\r\nmeans that it will be in 14.\r\n\r\n//Magnus\r\n\r\n\r\n>\r\n>\r\n> I'll have to see, if we dare building our own v13 version with the patch.\r\n> (I would love to, because I am simply thrilled to pieces, having a patch made by you for us )\r\n>\r\n> Regards,\r\n> Sigrid\r\n>\r\n> -----Original Message-----\r\n> From: Tom Lane <[email protected]>\r\n> Sent: Wednesday, October 28, 2020 5:55 PM\r\n> To: Ehrenreich, Sigrid <[email protected]>\r\n> Cc: David Rowley <[email protected]>; [email protected]\r\n> Subject: Re: Postgres Optimizer ignores information about foreign key relationship, severly misestimating number of returned rows in join\r\n>\r\n> \"Ehrenreich, Sigrid\" <[email protected]> writes:\r\n> > A patch would be very much appreciated.\r\n> > We are currently running on Version 12, but could upgrade to 13, if necessary.\r\n> > Could you send me a notification if you managed to program a patch for that?\r\n>\r\n> I've pushed a patch for this to HEAD, but current thinking is that we\r\n> will not be back-patching it. 
Still, if you're desperate you could\r\n> consider running a custom build of v13 with that patch --- a quick\r\n> check suggests that it would back-patch easily. v12 would be a\r\n> slightly harder lift (I see one hunk doesn't apply) but probably\r\n> not by much.\r\n>\r\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=ad1c36b0709e47cdb3cc4abd6c939fe64279b63f\r\n>\r\n> regards, tom lane\r\n",
"msg_date": "Thu, 29 Oct 2020 12:12:09 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Postgres Optimizer ignores information about foreign key\n relationship, severly misestimating number of returned rows in join"
}
]
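The size of the misestimate discussed in the thread above can be checked with back-of-the-envelope arithmetic. When the FK is not matched, the planner falls back to multiplying per-clause selectivities for the two remaining equijoin quals, roughly 1/ndistinct each (the test data gives anydata1 about 1000 distinct values and anydata2 about 100), whereas the FK implies that every filtered fact row joins to exactly one dim row. A sketch of both calculations using the row counts from the EXPLAIN output; the real planner arithmetic is more involved than this:

```python
# Rows surviving the low_card = 1 filter on each side (from the EXPLAIN output).
fact_rows = 3344
dim_rows = 3292

# Without the FK: treat the two remaining equijoin quals as independent,
# each with selectivity ~ 1/ndistinct (~1000 values for anydata1, ~100 for anydata2).
independent_sel = (1 / 1000) * (1 / 100)
naive_estimate = fact_rows * dim_rows * independent_sel  # ~110, close to the planner's 115

# With the FK: every filtered fact row matches exactly one dim row,
# so the join should return all filtered fact rows.
fk_estimate = fact_rows  # 3344, matching the actual row count

print(round(naive_estimate), fk_estimate)  # 110 3344
```

This is why David Rowley's diagnosis matters: the `low_card = 1` qual removes one equijoin column from the join's restrictlist, the FK no longer matches in full, and the estimate collapses to the naive product.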
[
{
"msg_contents": "I'm trying to understand a bad estimate by the planner, and what I can do about it. The anonymized plan is here: https://explain.depesz.com/s/0MDz\n\nThe item I'm focused on is node 23. The estimate is for 7 rows, actual is 896 (multiplied by 1062 loops). I'm confused about two things in this node.\n\nThe first is Postgres' estimate. The condition for this index scan contains three expressions --\n\n(five_uniform = zulu_five.five_uniform) AND\n(whiskey_mike = juliet_india.whiskey_mike) AND\n(bravo = 'mike'::text)\n\nThe columns in the first two expressions (five_uniform and whiskey_mike) are NOT NULL, and have foreign key constraints to their respective tables (zulu_five.five_uniform and juliet_india.whiskey_mike). The planner can know in advance that 100% of the rows in the table will satisfy those criteria.\n\nFor the third expression (bravo = 'mike'), Postgres has excellent statistics. The estimated frequency of 'mike' is 2.228%, actual frequency is 2.242%, so Postgres' estimate is only off by a tiny amount (0.014%).\n\nFrom what I understand, the planner has all the information it needs to make a very accurate estimate here, but it's off by quite a lot. What information am I failing to give to the planner?\n\nMy second point of confusion is related. There are 564,071 rows in the source table (xray_india, aliased as papa) that satisfy the condition bravo = 'mike'. EXPLAIN reports the actual number of rows returned as 896*1062 ~= 951,552. I understand that the number reported by EXPLAIN is often a bit bigger, but this discrepancy is much larger than I'm expecting. What am I missing here?\n\nThanks,\nPhilip\n\n",
"msg_date": "Mon, 26 Oct 2020 12:50:38 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Understanding bad estimate (related to FKs?)"
},
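The second point of confusion in the message above (896 rows x 1062 loops vs 564,071 matching rows in the table) comes down to how EXPLAIN ANALYZE reports repeated inner scans: the actual row count is per execution (averaged over loops), and the scan runs once per outer row, so an inner row that matches several outer rows is counted once per loop that returns it. The arithmetic from the message:

```python
# Numbers quoted in the message above.
rows_per_loop = 896          # average actual rows per execution of node 23
loops = 1062                 # times the inner index scan was executed
matching_rows = 564_071      # rows in the table satisfying bravo = 'mike'

total_counted = rows_per_loop * loops
print(total_counted)  # 951552, the ~951,552 from the message

# The total exceeds 564,071 because each execution of the inner scan
# re-counts whichever matching rows it returns for that outer row.
assert total_counted > matching_rows
```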
{
"msg_contents": "On Mon, Oct 26, 2020 at 12:50:38PM -0400, Philip Semanchuk wrote:\n> I'm trying to understand a bad estimate by the planner, and what I can do about it. The anonymized plan is here: https://explain.depesz.com/s/0MDz\n\nWhat postgres version ?\nSince 9.6(?) FKs affect estimates.\n\n> The item I'm focused on is node 23. The estimate is for 7 rows, actual is 896 (multiplied by 1062 loops). I'm confused about two things in this node.\n> \n> The first is Postgres' estimate. The condition for this index scan contains three expressions --\n> \n> (five_uniform = zulu_five.five_uniform) AND\n> (whiskey_mike = juliet_india.whiskey_mike) AND\n> (bravo = 'mike'::text)\n> \n> The columns in the first two expressions (five_uniform and whiskey_mike) are NOT NULL, and have foreign key constraints to their respective tables (zulu_five.five_uniform and juliet_india.whiskey_mike). The planner can know in advance that 100% of the rows in the table will satisfy those criteria.\n> \n> For the third expression (bravo = 'mike'), Postgres has excellent statistics. The estimated frequency of 'mike' is 2.228%, actual frequency is 2.242%, so Postgres' estimate is only off by a tiny amount (0.014%).\n> \n> From what I understand, the planner has all the information it needs to make a very accurate estimate here, but it's off by quite a lot. What information am I failing to give to the planner?\n> \n> My second point of confusion is related. There are 564,071 rows in the source table (xray_india, aliased as papa) that satisfy the condition bravo = 'mike'. EXPLAIN reports the actual number of rows returned as 896*1062 ~= 951,552. I understand that the number reported by EXPLAIN is often a bit bigger, but this discrepancy is much larger than I'm expecting. What am I missing here?\n\n\n",
"msg_date": "Mon, 26 Oct 2020 12:04:05 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Oct 26, 2020, at 1:04 PM, Justin Pryzby <[email protected]> wrote:\n> \n> On Mon, Oct 26, 2020 at 12:50:38PM -0400, Philip Semanchuk wrote:\n>> I'm trying to understand a bad estimate by the planner, and what I can do about it. The anonymized plan is here: https://explain.depesz.com/s/0MDz\n> \n> What postgres version ?\n> Since 9.6(?) FKs affect estimates.\n\nWe’re using 11.6 (under AWS Aurora).\n\n\n> \n>> The item I'm focused on is node 23. The estimate is for 7 rows, actual is 896 (multiplied by 1062 loops). I'm confused about two things in this node.\n>> \n>> The first is Postgres' estimate. The condition for this index scan contains three expressions --\n>> \n>> (five_uniform = zulu_five.five_uniform) AND\n>> (whiskey_mike = juliet_india.whiskey_mike) AND\n>> (bravo = 'mike'::text)\n>> \n>> The columns in the first two expressions (five_uniform and whiskey_mike) are NOT NULL, and have foreign key constraints to their respective tables (zulu_five.five_uniform and juliet_india.whiskey_mike). The planner can know in advance that 100% of the rows in the table will satisfy those criteria.\n>> \n>> For the third expression (bravo = 'mike'), Postgres has excellent statistics. The estimated frequency of 'mike' is 2.228%, actual frequency is 2.242%, so Postgres' estimate is only off by a tiny amount (0.014%).\n>> \n>> From what I understand, the planner has all the information it needs to make a very accurate estimate here, but it's off by quite a lot. What information am I failing to give to the planner?\n>> \n>> My second point of confusion is related. There are 564,071 rows in the source table (xray_india, aliased as papa) that satisfy the condition bravo = 'mike'. EXPLAIN reports the actual number of rows returned as 896*1062 ~= 951,552. I understand that the number reported by EXPLAIN is often a bit bigger, but this discrepancy is much larger than I'm expecting. What am I missing here?\n\n\n\n",
"msg_date": "Mon, 26 Oct 2020 13:14:12 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk <\[email protected]> wrote:\n\n> >> The item I'm focused on is node 23. The estimate is for 7 rows, actual\n> is 896 (multiplied by 1062 loops). I'm confused about two things in this\n> node.\n> >>\n> >> The first is Postgres' estimate. The condition for this index scan\n> contains three expressions --\n> >>\n> >> (five_uniform = zulu_five.five_uniform) AND\n> >> (whiskey_mike = juliet_india.whiskey_mike) AND\n> >> (bravo = 'mike'::text)\n>\n\nAre the columns correlated? Have you tried to create extended statistics\nand see if the estimate changes? I believe that extended stats will not\ndirectly help with joins though, only group bys and perhaps choosing an\nindex scan vs table scan when comparing the correlated columns to static\nvalues rather than joining up tables. Wouldn't be much effort to try it\nthough.\n\nOn Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk <[email protected]> wrote:>> The item I'm focused on is node 23. The estimate is for 7 rows, actual is 896 (multiplied by 1062 loops). I'm confused about two things in this node.\n>> \n>> The first is Postgres' estimate. The condition for this index scan contains three expressions --\n>> \n>> (five_uniform = zulu_five.five_uniform) AND\n>> (whiskey_mike = juliet_india.whiskey_mike) AND\n>> (bravo = 'mike'::text)Are the columns correlated? Have you tried to create extended statistics and see if the estimate changes? I believe that extended stats will not directly help with joins though, only group bys and perhaps choosing an index scan vs table scan when comparing the correlated columns to static values rather than joining up tables. Wouldn't be much effort to try it though.",
"msg_date": "Mon, 26 Oct 2020 11:20:01 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Oct 26, 2020, at 1:20 PM, Michael Lewis <[email protected]> wrote:\n> \n> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk <[email protected]> wrote:\n> >> The item I'm focused on is node 23. The estimate is for 7 rows, actual is 896 (multiplied by 1062 loops). I'm confused about two things in this node.\n> >> \n> >> The first is Postgres' estimate. The condition for this index scan contains three expressions --\n> >> \n> >> (five_uniform = zulu_five.five_uniform) AND\n> >> (whiskey_mike = juliet_india.whiskey_mike) AND\n> >> (bravo = 'mike'::text)\n> \n> Are the columns correlated? Have you tried to create extended statistics and see if the estimate changes? I believe that extended stats will not directly help with joins though, only group bys and perhaps choosing an index scan vs table scan when comparing the correlated columns to static values rather than joining up tables. Wouldn't be much effort to try it though.\n\n\nThere’s not a lot of correlation between whiskey_mike and bravo --\nstxkind stxndistinct stxdependencies\n['d', 'f'] {\"7, 12\": 42} {\"12 => 7\": 0.000274}\n\nThose stats didn’t help the planner. \n\nI should have mentioned that five_uniform has ~63k unique values, whereas whiskey_mike has only 3, and bravo only 19.\n\nCheers\nPhilip\n\n",
"msg_date": "Mon, 26 Oct 2020 14:55:46 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk <[email protected]> wrote:\n> \n> > >> The item I'm focused on is node 23. The estimate is for 7 rows, actual\n> > is 896 (multiplied by 1062 loops). I'm confused about two things in this\n> > node.\n> > >>\n> > >> The first is Postgres' estimate. The condition for this index scan\n> > contains three expressions --\n> > >>\n> > >> (five_uniform = zulu_five.five_uniform) AND\n> > >> (whiskey_mike = juliet_india.whiskey_mike) AND\n> > >> (bravo = 'mike'::text)\n> >\n> \n> Are the columns correlated?\n\nI guess it shouldn't matter, since the FKs should remove all but one of the\nconditions.\n\nMaybe you saw this other thread, which I tentatively think also affects your\ncase (equijoin with nonjoin condition)\nhttps://www.postgresql.org/message-id/AM6PR02MB5287A0ADD936C1FA80973E72AB190%40AM6PR02MB5287.eurprd02.prod.outlook.com\n\n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:13:43 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]> wrote:\n> \n> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk <[email protected]> wrote:\n>> \n>>>>> The item I'm focused on is node 23. The estimate is for 7 rows, actual\n>>> is 896 (multiplied by 1062 loops). I'm confused about two things in this\n>>> node.\n>>>>> \n>>>>> The first is Postgres' estimate. The condition for this index scan\n>>> contains three expressions --\n>>>>> \n>>>>> (five_uniform = zulu_five.five_uniform) AND\n>>>>> (whiskey_mike = juliet_india.whiskey_mike) AND\n>>>>> (bravo = 'mike'::text)\n>>> \n>> \n>> Are the columns correlated?\n> \n> I guess it shouldn't matter, since the FKs should remove all but one of the\n> conditions.\n\nYes, I had the same expectation. I thought Postgres would calculate the selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency of ‘mike’, but since the frequency estimate is very accurate but the planner’s estimate is not, there’s something else going on. \n\n> Maybe you saw this other thread, which I tentatively think also affects your\n> case (equijoin with nonjoin condition)\n> https://www.postgresql.org/message-id/AM6PR02MB5287A0ADD936C1FA80973E72AB190%40AM6PR02MB5287.eurprd02.prod.outlook.com\n\nYes, thank you, I read that thread with interest. I tried your clever trick using BETWEEN, but it didn’t change the plan. Does that suggest there’s some other cause for the planner’s poor estimate?\n\nCheers\nPhilip\n\n\n\n\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:25:48 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n>\n>\n>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n>> wrote:\n>>\n>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n>>> <[email protected]> wrote:\n>>>\n>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n>>>>>> actual\n>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n>>>> this node.\n>>>>>>\n>>>>>> The first is Postgres' estimate. The condition for this index\n>>>>>> scan\n>>>> contains three expressions --\n>>>>>>\n>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n>>>>\n>>>\n>>> Are the columns correlated?\n>>\n>> I guess it shouldn't matter, since the FKs should remove all but one\n>> of the conditions.\n>\n>Yes, I had the same expectation. I thought Postgres would calculate the\n>selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n>of ‘mike’, but since the frequency estimate is very accurate but the\n>planner’s estimate is not, there’s something else going on.\n>\n\nWell, this is quite a bit more complicated, I'm afraid :-( The clauses\ninclude parameters passed from the nodes above the index scan. So even\nif we had extended stats on the table, we couldn't use them as that\nrequires (Var op Const) conditions. So this likely ends up with a\nproduct of estimates for each clause, and even then we can't use any\nparticular value so we probably end up with something like 1/ndistinct\nor something like that. 
So if the values actually passed to the index\nscan are more common and/or if the columns are somehow correlated, it's\nnot surprising we end up with an overestimate.\n\n>> Maybe you saw this other thread, which I tentatively think also\n>> affects your case (equijoin with nonjoin condition)\n>> https://www.postgresql.org/message-id/AM6PR02MB5287A0ADD936C1FA80973E72AB190%40AM6PR02MB5287.eurprd02.prod.outlook.com\n>\n>Yes, thank you, I read that thread with interest. I tried your clever\n>trick using BETWEEN, but it didn’t change the plan. Does that suggest\n>there’s some other cause for the planner’s poor estimate?\n>\n\nI don't think that's related - to hit that bug, there would have to be\nimplied conditions pushed-down to the scan level. And there's nothing\nlike that in this case.\n\nFWIW I don't think this has anything to do with join cardinality\nestimation - at least not for the node 23.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 29 Oct 2020 23:48:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Oct 29, 2020, at 6:48 PM, Tomas Vondra <[email protected]> wrote:\n> \n> On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n>> \n>> \n>>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n>>> wrote:\n>>> \n>>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n>>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n>>>> <[email protected]> wrote:\n>>>> \n>>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n>>>>>>> actual\n>>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n>>>>> this node.\n>>>>>>> \n>>>>>>> The first is Postgres' estimate. The condition for this index\n>>>>>>> scan\n>>>>> contains three expressions --\n>>>>>>> \n>>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n>>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n>>>>> \n>>>> \n>>>> Are the columns correlated?\n>>> \n>>> I guess it shouldn't matter, since the FKs should remove all but one\n>>> of the conditions.\n>> \n>> Yes, I had the same expectation. I thought Postgres would calculate the\n>> selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n>> of ‘mike’, but since the frequency estimate is very accurate but the\n>> planner’s estimate is not, there’s something else going on.\n>> \n> \n> Well, this is quite a bit more complicated, I'm afraid :-( The clauses\n> include parameters passed from the nodes above the index scan. So even\n> if we had extended stats on the table, we couldn't use them as that\n> requires (Var op Const) conditions. So this likely ends up with a\n> product of estimates for each clause, and even then we can't use any\n> particular value so we probably end up with something like 1/ndistinct\n> or something like that. So if the values actually passed to the index\n> scan are more common and/or if the columns are somehow correlated, it's\n> not surprising we end up with an overestimate.\n\nI appreciate the insight. 
1/ndistinct is exactly right. In pg_stats, five_uniform’s ndistinct = 26326, and whiskey_mike’s ndistinct = 3. The estimated frequency of bravo = ‘mike’ is .02228. There are 25156157 rows in the source table, so we have: \n\n25156157 * (1/26326.0) * (1/3.0) * .02228 = 7.0966494209\n\nHence the estimate of 7 rows returned.\n\nIt's interesting that five_uniform’s estimated ndistinct is low by > 50% (actual = 62958). Paradoxically, if I manually set ndistinct to the correct value of 62958, the estimate gets worse (3 rows instead of 7). \n\nSuggestions for fixing this are of course welcome. :-)\n\nOn a related topic, are there any in depth guides to the planner that I could read? I can (and have) read the source code and it’s been informative, but something higher level than the source code would help.\n\nThanks so much\nPhilip \n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:56:54 -0400",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "Hi,\n\nLe ven. 30 oct. 2020 à 15:57, Philip Semanchuk <[email protected]>\na écrit :\n\n>\n>\n> > On Oct 29, 2020, at 6:48 PM, Tomas Vondra <[email protected]>\n> wrote:\n> >\n> > On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n> >>\n> >>\n> >>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n> >>> wrote:\n> >>>\n> >>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n> >>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n> >>>> <[email protected]> wrote:\n> >>>>\n> >>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n> >>>>>>> actual\n> >>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n> >>>>> this node.\n> >>>>>>>\n> >>>>>>> The first is Postgres' estimate. The condition for this index\n> >>>>>>> scan\n> >>>>> contains three expressions --\n> >>>>>>>\n> >>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n> >>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n> >>>>>\n> >>>>\n> >>>> Are the columns correlated?\n> >>>\n> >>> I guess it shouldn't matter, since the FKs should remove all but one\n> >>> of the conditions.\n> >>\n> >> Yes, I had the same expectation. I thought Postgres would calculate the\n> >> selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n> >> of ‘mike’, but since the frequency estimate is very accurate but the\n> >> planner’s estimate is not, there’s something else going on.\n> >>\n> >\n> > Well, this is quite a bit more complicated, I'm afraid :-( The clauses\n> > include parameters passed from the nodes above the index scan. So even\n> > if we had extended stats on the table, we couldn't use them as that\n> > requires (Var op Const) conditions. So this likely ends up with a\n> > product of estimates for each clause, and even then we can't use any\n> > particular value so we probably end up with something like 1/ndistinct\n> > or something like that. 
So if the values actually passed to the index\n> scan are more common and/or if the columns are somehow correlated, it's\n> not surprising we end up with an overestimate.\n>\n> I appreciate the insight. 1/ndistinct is exactly right. In pg_stats,\n> five_uniform’s ndistinct = 26326, and whiskey_mike’s ndistinct = 3. The\n> estimated frequency of bravo = ‘mike’ is .02228. There are 25156157 rows in\n> the source table, so we have:\n>\n> 25156157 * (1/26326.0) * (1/3.0) * .02228 = 7.0966494209\n>\n> Hence the estimate of 7 rows returned.\n>\n> It's interesting that five_uniform’s estimated ndistinct is low by > 50%\n> (actual = 62958). Paradoxically, if I manually set ndistinct to the correct\n> value of 62958, the estimate gets worse (3 rows instead of 7).\n>\n> Suggestions for fixing this are of course welcome. :-)\n>\n> On a related topic, are there any in depth guides to the planner that I\n> could read? I can (and have) read the source code and it’s been\n> informative, but something higher level than the source code would help.\n>\n>\nYou may already know this, but there's a bunch of documents up there:\nhttps://wiki.postgresql.org/wiki/Using_EXPLAIN\n\nI'm also working on a project to better document this. I'm just at the\nbeginning, writing it all, in english (which isn't my native language), so\nit takes time. I already have most of it in french in various\ndocuments/formats, but it takes time to go through all of these, summarize\nthem, and translate them. Anyway, work in progress as they say. You can\nhave a look at it there:\nhttps://pgplanner.readthedocs.io/en/latest/index.html. Any comment/help is\nvery welcome.\n\n\n-- \nGuillaume.\n",
"msg_date": "Sat, 31 Oct 2020 14:53:48 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Oct 31, 2020, at 9:53 AM, Guillaume Lelarge <[email protected]> wrote:\n> \n> Hi,\n> \n> Le ven. 30 oct. 2020 à 15:57, Philip Semanchuk <[email protected]> a écrit :\n> \n> \n> > On Oct 29, 2020, at 6:48 PM, Tomas Vondra <[email protected]> wrote:\n> > \n> > On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n> >> \n> >> \n> >>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n> >>> wrote:\n> >>> \n> >>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n> >>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n> >>>> <[email protected]> wrote:\n> >>>> \n> >>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n> >>>>>>> actual\n> >>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n> >>>>> this node.\n> >>>>>>> \n> >>>>>>> The first is Postgres' estimate. The condition for this index\n> >>>>>>> scan\n> >>>>> contains three expressions --\n> >>>>>>> \n> >>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n> >>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n> >>>>> \n> >>>> \n> >>>> Are the columns correlated?\n> >>> \n> >>> I guess it shouldn't matter, since the FKs should remove all but one\n> >>> of the conditions.\n> >> \n> >> Yes, I had the same expectation. I thought Postgres would calculate the\n> >> selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n> >> of ‘mike’, but since the frequency estimate is very accurate but the\n> >> planner’s estimate is not, there’s something else going on.\n> >> \n> > \n> > Well, this is quite a bit more complicated, I'm afraid :-( The clauses\n> > include parameters passed from the nodes above the index scan. So even\n> > if we had extended stats on the table, we couldn't use them as that\n> > requires (Var op Const) conditions. 
So this likely ends up with a\n> > product of estimates for each clause, and even then we can't use any\n> > particular value so we probably end up with something like 1/ndistinct\n> > or something like that. So if the values actually passed to the index\n> > scan are more common and/or if the columns are somehow correlated, it's\n> > not surprising we end up with an overestimate.\n> \n> I appreciate the insight. 1/ndistinct is exactly right. In pg_stats, five_uniform’s ndistinct = 26326, and whiskey_mike’s ndistinct = 3. The estimated frequency of bravo = ‘mike’ is .02228. There are 25156157 rows in the source table, so we have: \n> \n> 25156157 * (1/26326.0) * (1/3.0) * .02228 = 7.0966494209\n> \n> Hence the estimate of 7 rows returned.\n> \n> It's interesting that five_uniform’s estimated ndistinct is low by > 50% (actual = 62958). Paradoxically, if I manually set ndistinct to the correct value of 62958, the estimate gets worse (3 rows instead of 7). \n> \n> Suggestions for fixing this are of course welcome. :-)\n> \n> On a related topic, are there any in depth guides to the planner that I could read? I can (and have) read the source code and it’s been informative, but something higher level than the source code would help.\n> \n> \n> You may already know this, but there's a bunch of documents up there: https://wiki.postgresql.org/wiki/Using_EXPLAIN\n\n\nBien merci, yes, I've visited most of those links and learned an enormous amount from them. I've downloaded many of them for re-reading, including yours. :-) It's helpful to be reminded of them again.\n\nEXPLAIN ANALYZE tells me what choices the planner made, but it doesn't tell me why the planner made those choices. For instance, Tomas Vondra's post enabled me to calculate how the planner arrived at its estimate of 7 rows for one node of my query. I would prefer not to reverse engineer the planner's calculation, but instead have the planner just tell me. 
\n\nIf I was able to combine that information with a summary of the planner's algorithm (a lot to ask for!), then I could understand how the planner chose its plan.\n\nThe query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n\nCheers\nPhilip\n\n\n\n",
"msg_date": "Mon, 2 Nov 2020 14:09:03 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 02:09:03PM -0500, Philip Semanchuk wrote:\n> Bien merci, yes, I've visited most of those links and learned an enormous amount from them. I've downloaded many of them for re-reading, including yours. :-) It's helpful to be reminded of them again.\n> \n> EXPLAIN ANALYZE tells me what choices the planner made, but it doesn't tell me why the planner made those choices. For instance, Tomas Vondra's post enabled me to calculate how the planner arrived at its estimate of 7 rows for one node of my query. I would prefer not to reverse engineer the planner's calculation, but instead have the planner just tell me. \n> \n> If I was able to combine that information with a summary of the planner's algorithm (a lot to ask for!), then I could understand how the planner chose its plan.\n> \n> The query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n\nI hestitate to suggest it, but maybe you'd want to use \n\n./configure CFLAGS='-DOPTIMIZER_DEBUG=1'\n\nwhich will print out costs of each plan node considered.\n\nYou could also read in selfuncs.c and costsize.c and related parts of the\nsource.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 2 Nov 2020 13:53:39 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "Philip Semanchuk <[email protected]> writes:\n> The query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n\nThe twenty-thousand-foot overview is\n\nhttps://www.postgresql.org/docs/devel/planner-optimizer.html\n\nand then ten-thousand-foot level is the planner README file,\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/optimizer/README;hb=HEAD\n\nand then you pretty much gotta start reading code. You could also dig\ninto various planner expository talks that people have given at PG\nconferences. I don't have links at hand, but there have been several.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 15:08:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "Hi,\n\nLe lun. 2 nov. 2020 à 20:09, Philip Semanchuk <[email protected]>\na écrit :\n\n>\n>\n> > On Oct 31, 2020, at 9:53 AM, Guillaume Lelarge <[email protected]>\n> wrote:\n> >\n> > Hi,\n> >\n> > Le ven. 30 oct. 2020 à 15:57, Philip Semanchuk <\n> [email protected]> a écrit :\n> >\n> >\n> > > On Oct 29, 2020, at 6:48 PM, Tomas Vondra <\n> [email protected]> wrote:\n> > >\n> > > On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n> > >>\n> > >>\n> > >>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n> > >>> wrote:\n> > >>>\n> > >>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n> > >>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n> > >>>> <[email protected]> wrote:\n> > >>>>\n> > >>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n> > >>>>>>> actual\n> > >>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n> > >>>>> this node.\n> > >>>>>>>\n> > >>>>>>> The first is Postgres' estimate. The condition for this index\n> > >>>>>>> scan\n> > >>>>> contains three expressions --\n> > >>>>>>>\n> > >>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n> > >>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n> > >>>>>\n> > >>>>\n> > >>>> Are the columns correlated?\n> > >>>\n> > >>> I guess it shouldn't matter, since the FKs should remove all but one\n> > >>> of the conditions.\n> > >>\n> > >> Yes, I had the same expectation. I thought Postgres would calculate\n> the\n> > >> selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n> > >> of ‘mike’, but since the frequency estimate is very accurate but the\n> > >> planner’s estimate is not, there’s something else going on.\n> > >>\n> > >\n> > > Well, this is quite a bit more complicated, I'm afraid :-( The clauses\n> > > include parameters passed from the nodes above the index scan. 
So even\n> > > if we had extended stats on the table, we couldn't use them as that\n> > > requires (Var op Const) conditions. So this likely ends up with a\n> > > product of estimates for each clause, and even then we can't use any\n> > > particular value so we probably end up with something like 1/ndistinct\n> > > or something like that. So if the values actually passed to the index\n> > > scan are more common and/or if the columns are somehow correlated, it's\n> > > not surprising we end up with an overestimate.\n> >\n> > I appreciate the insight. 1/ndistinct is exactly right. In pg_stats,\n> five_uniform’s ndistinct = 26326, and whiskey_mike’s ndistinct = 3. The\n> estimated frequency of bravo = ‘mike’ is .02228. There are 25156157 rows in\n> the source table, so we have:\n> >\n> > 25156157 * (1/26326.0) * (1/3.0) * .02228 = 7.0966494209\n> >\n> > Hence the estimate of 7 rows returned.\n> >\n> > It's interesting that five_uniform’s estimated ndistinct is low by > 50%\n> (actual = 62958). Paradoxically, if I manually set ndistinct to the correct\n> value of 62958, the estimate gets worse (3 rows instead of 7).\n> >\n> > Suggestions for fixing this are of course welcome. :-)\n> >\n> > On a related topic, are there any in depth guides to the planner that I\n> could read? I can (and have) read the source code and it’s been\n> informative, but something higher level than the source code would help.\n> >\n> >\n> > You may already know this, but there's a bunch of documents up there:\n> https://wiki.postgresql.org/wiki/Using_EXPLAIN\n>\n>\n> Bien merci, yes, I've visited most of those links and learned an enormous\n> amount from them. I've downloaded many of them for re-reading, including\n> yours. :-) It's helpful to be reminded of them again.\n>\n> EXPLAIN ANALYZE tells me what choices the planner made, but it doesn't\n> tell me why the planner made those choices. 
For instance, Tomas Vondra's\n> post enabled me to calculate how the planner arrived at its estimate of 7\n> rows for one node of my query. I would prefer not to reverse engineer the\n> planner's calculation, but instead have the planner just tell me.\n>\n> If I was able to combine that information with a summary of the planner's\n> algorithm (a lot to ask for!), then I could understand how the planner\n> chose its plan.\n>\n> The query I asked about in the original post of this thread has 13\n> relations in it. IIUC, that's 13! or > 6 billion possible plans. How did\n> the planner pick one plan out of 6 billion? I'm curious, both for practical\n> purposes (I want my query to run well) and also because it's fascinating.\n>\n>\nI understand, and I mostly agree, especially on the fascinating side of the\nplanner.\n\nAnyway, that's what I'm working on right now. It will take a lot of time,\nand it will probably contain a lot of errors at the beginning, but I'll be\nhappy to fix them.\n\n\n-- \nGuillaume.\n\nHi,Le lun. 2 nov. 2020 à 20:09, Philip Semanchuk <[email protected]> a écrit :\n\n> On Oct 31, 2020, at 9:53 AM, Guillaume Lelarge <[email protected]> wrote:\n> \n> Hi,\n> \n> Le ven. 30 oct. 2020 à 15:57, Philip Semanchuk <[email protected]> a écrit :\n> \n> \n> > On Oct 29, 2020, at 6:48 PM, Tomas Vondra <[email protected]> wrote:\n> > \n> > On Thu, Oct 29, 2020 at 11:25:48AM -0400, Philip Semanchuk wrote:\n> >> \n> >> \n> >>> On Oct 28, 2020, at 9:13 PM, Justin Pryzby <[email protected]>\n> >>> wrote:\n> >>> \n> >>> On Mon, Oct 26, 2020 at 11:20:01AM -0600, Michael Lewis wrote:\n> >>>> On Mon, Oct 26, 2020 at 11:14 AM Philip Semanchuk\n> >>>> <[email protected]> wrote:\n> >>>> \n> >>>>>>> The item I'm focused on is node 23. The estimate is for 7 rows,\n> >>>>>>> actual\n> >>>>> is 896 (multiplied by 1062 loops). I'm confused about two things in\n> >>>>> this node.\n> >>>>>>> \n> >>>>>>> The first is Postgres' estimate. 
The condition for this index\n> >>>>>>> scan\n> >>>>> contains three expressions --\n> >>>>>>> \n> >>>>>>> (five_uniform = zulu_five.five_uniform) AND (whiskey_mike =\n> >>>>>>> juliet_india.whiskey_mike) AND (bravo = 'mike'::text)\n> >>>>> \n> >>>> \n> >>>> Are the columns correlated?\n> >>> \n> >>> I guess it shouldn't matter, since the FKs should remove all but one\n> >>> of the conditions.\n> >> \n> >> Yes, I had the same expectation. I thought Postgres would calculate the\n> >> selectivity as 1.0 * 1.0 * whatever estimate it has for the frequency\n> >> of ‘mike’, but since the frequency estimate is very accurate but the\n> >> planner’s estimate is not, there’s something else going on.\n> >> \n> > \n> > Well, this is quite a bit more complicated, I'm afraid :-( The clauses\n> > include parameters passed from the nodes above the index scan. So even\n> > if we had extended stats on the table, we couldn't use them as that\n> > requires (Var op Const) conditions. So this likely ends up with a\n> > product of estimates for each clause, and even then we can't use any\n> > particular value so we probably end up with something like 1/ndistinct\n> > or something like that. So if the values actually passed to the index\n> > scan are more common and/or if the columns are somehow correlated, it's\n> > not surprising we end up with an overestimate.\n> \n> I appreciate the insight. 1/ndistinct is exactly right. In pg_stats, five_uniform’s ndistinct = 26326, and whiskey_mike’s ndistinct = 3. The estimated frequency of bravo = ‘mike’ is .02228. There are 25156157 rows in the source table, so we have: \n> \n> 25156157 * (1/26326.0) * (1/3.0) * .02228 = 7.0966494209\n> \n> Hence the estimate of 7 rows returned.\n> \n> It's interesting that five_uniform’s estimated ndistinct is low by > 50% (actual = 62958). Paradoxically, if I manually set ndistinct to the correct value of 62958, the estimate gets worse (3 rows instead of 7). 
\n> \n> Suggestions for fixing this are of course welcome. :-)\n> \n> On a related topic, are there any in depth guides to the planner that I could read? I can (and have) read the source code and it’s been informative, but something higher level than the source code would help.\n> \n> \n> You may already know this, but there's a bunch of documents up there: https://wiki.postgresql.org/wiki/Using_EXPLAIN\n\n\nBien merci, yes, I've visited most of those links and learned an enormous amount from them. I've downloaded many of them for re-reading, including yours. :-) It's helpful to be reminded of them again.\n\nEXPLAIN ANALYZE tells me what choices the planner made, but it doesn't tell me why the planner made those choices. For instance, Tomas Vondra's post enabled me to calculate how the planner arrived at its estimate of 7 rows for one node of my query. I would prefer not to reverse engineer the planner's calculation, but instead have the planner just tell me. \n\nIf I was able to combine that information with a summary of the planner's algorithm (a lot to ask for!), then I could understand how the planner chose its plan.\n\nThe query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\nI understand, and I mostly agree, especially on the fascinating side of the planner. Anyway, that's what I'm working on right now. It will take a lot of time, and it will probably contain a lot of errors at the beginning, but I'll be happy to fix them.-- Guillaume.",
"msg_date": "Mon, 2 Nov 2020 21:48:52 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": ">\n> The query I asked about in the original post of this thread has 13\n> relations in it. IIUC, that's 13! or > 6 billion possible plans. How did\n> the planner pick one plan out of 6 billion? I'm curious, both for practical\n> purposes (I want my query to run well) and also because it's fascinating.\n>\n\nHave you increased geqo_threshold and join_collapse_limit from the defaults?\n\nThe query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.Have you increased geqo_threshold and join_collapse_limit from the defaults?",
"msg_date": "Mon, 2 Nov 2020 16:09:52 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 03:08:12PM -0500, Tom Lane wrote:\n>Philip Semanchuk <[email protected]> writes:\n>> The query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n>\n>The twenty-thousand-foot overview is\n>\n>https://www.postgresql.org/docs/devel/planner-optimizer.html\n>\n>and then ten-thousand-foot level is the planner README file,\n>\n>https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/optimizer/README;hb=HEAD\n>\n>and then you pretty much gotta start reading code. You could also dig\n>into various planner expository talks that people have given at PG\n>conferences. I don't have links at hand, but there have been several.\n>\n\nYeah. The jump from high-level overviews to reading source code is a bit\nbrutal, though ...\n\n\nFWIW a short list of relevant talks I'm aware of & would recommend:\n\n* Explaining the Postgres Query Optimizer [Bruce Momjian]\n https://www.postgresql.org/files/developer/tour.pdf\n\n* Intro to Postgres Planner Hacking [Melanie Plageman]\n https://www.pgcon.org/2019/schedule/events/1379.en.html\n\n* Learning to Hack on Postgres Planner [Melanie Plageman]\n https://www.pgcon.org/2019/schedule/attachments/540_debugging_planner_pgcon2019_v4.pdf\n\n* What’s in a Plan? 
[Robert Haas]\n https://www.postgresql.eu/events/pgconfeu2019/schedule/session/2741-whats-in-a-plan/\n \n* A Tour of PostgreSQL Internals [Tom Lane]\n https://www.postgresql.org/files/developer/tour.pdf\n\n* Inside the PostgreSQL Query Optimizer [Neil Conway]\n http://www.neilconway.org/talks/optimizer/optimizer.pdf\n\nSome are a bit dated, but the overall principles don't change much.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 3 Nov 2020 04:17:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Nov 2, 2020, at 6:09 PM, Michael Lewis <[email protected]> wrote:\n> \n> The query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n> \n> Have you increased geqo_threshold and join_collapse_limit from the defaults?\n\n\nYes, thanks you, I should have said that. We avoid the GEQO, so geqo_threshold=25, and join_collapse_limit=from_collapse_limit=24. We tend to have long running queries, so we’re happy to pay a few seconds of extra planner cost to increase the likelihood of getting a better plan.\n\nCheers\nPhilip\n\n",
"msg_date": "Tue, 3 Nov 2020 10:07:24 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
},
{
"msg_contents": "\n\n> On Nov 2, 2020, at 10:17 PM, Tomas Vondra <[email protected]> wrote:\n> \n> On Mon, Nov 02, 2020 at 03:08:12PM -0500, Tom Lane wrote:\n>> Philip Semanchuk <[email protected]> writes:\n>>> The query I asked about in the original post of this thread has 13 relations in it. IIUC, that's 13! or > 6 billion possible plans. How did the planner pick one plan out of 6 billion? I'm curious, both for practical purposes (I want my query to run well) and also because it's fascinating.\n>> \n>> The twenty-thousand-foot overview is\n>> \n>> https://www.postgresql.org/docs/devel/planner-optimizer.html\n>> \n>> and then ten-thousand-foot level is the planner README file,\n>> \n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/optimizer/README;hb=HEAD\n>> \n>> and then you pretty much gotta start reading code. You could also dig\n>> into various planner expository talks that people have given at PG\n>> conferences. I don't have links at hand, but there have been several.\n>> \n> \n> Yeah. The jump from high-level overviews to reading source code is a bit\n> brutal, though ...\n> \n> \n> FWIW a short list of relevant talks I'm aware of & would recommend:\n> \n> * Explaining the Postgres Query Optimizer [Bruce Momjian]\n> https://www.postgresql.org/files/developer/tour.pdf\n> \n> * Intro to Postgres Planner Hacking [Melanie Plageman]\n> https://www.pgcon.org/2019/schedule/events/1379.en.html\n> \n> * Learning to Hack on Postgres Planner [Melanie Plageman]\n> https://www.pgcon.org/2019/schedule/attachments/540_debugging_planner_pgcon2019_v4.pdf\n> \n> * What’s in a Plan? 
[Robert Haas]\n> https://www.postgresql.eu/events/pgconfeu2019/schedule/session/2741-whats-in-a-plan/\n> * A Tour of PostgreSQL Internals [Tom Lane]\n> https://www.postgresql.org/files/developer/tour.pdf\n> \n> * Inside the PostgreSQL Query Optimizer [Neil Conway]\n> http://www.neilconway.org/talks/optimizer/optimizer.pdf\n> \n> Some are a bit dated, but the overall principles don't change much.\n\n\nThank you so much to Tomas V, Tom L, Guillaume, Justin, and Michael for all the suggestions and direction. I really appreciate your time & wisdom (not to mention your contributions to Postgres!)\n\nCheers\nPhilip\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 3 Nov 2020 10:27:38 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Understanding bad estimate (related to FKs?)"
}
] |
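The planner settings Philip quotes in this thread can be applied per session before running EXPLAIN. A short sketch of those exact values (taken from his reply above; raising these limits trades extra planning time for a better chance at a good plan):

```sql
-- Values from this thread: avoid GEQO and let the planner search the
-- full join-order space for a ~13-relation query.
SET geqo_threshold = 25;        -- exhaustive planning below this many relations
SET join_collapse_limit = 24;   -- allow explicit JOIN lists to be reordered
SET from_collapse_limit = 24;   -- allow FROM-list subproblems to be merged
```

These settings only change how hard the planner searches; whether the chosen plan is actually better still depends on the row estimates discussed in the thread.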
[
{
"msg_contents": "Hello,\nI have a large table (about 13 million rows) full of customer order information. Since most of that information is for orders that have already been fulfilled, I have a partial index to help quickly zero in on rows that have not been fulfilled. This works well, but I noticed today when joining with another large table using its primary key that even though the planner was using my partial index, it decided to do a merge join to the second large table instead of the nested loop I would have expected.\n\nLooking at it in more detail, I found that the planner is assuming that I'll get millions of rows back even when I do a simple query that does an index scan on my partial index:\n\n=> \\d orderitems_committed_unfulfilled \nIndex \"public.orderitems_committed_unfulfilled\"\n Column | Type | Key? | Definition \n--------+--------+------+------------\n id | bigint | yes | id\nbtree, for table \"public.orderitems\", predicate (LEAST(committed, quantity) > fulfilled)\n\n=> explain (analyze, buffers) select oi.id from orderitems oi where LEAST(oi.committed, oi.quantity) > oi.fulfilled;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using orderitems_committed_unfulfilled on orderitems oi (cost=0.41..31688.23 rows=2861527 width=8) (actual time=0.039..2.092 rows=2274 loops=1)\n Heap Fetches: 2493\n Buffers: shared hit=1883\n Planning Time: 0.110 ms\n Execution Time: 2.255 ms\n(5 rows)\n\nSo nice and quick, but the planner thought it would get back 2861527 rows instead of the 2274 I actually have. That explains why it thought it would make sense to do a merge join with my other large table instead of the nested loop over the 2k rows. 
I would have expected the planner to know that there's no way it'll get back 2 million rows though, given that:\n\n=> select relname, relpages, reltuples from pg_class where relname = 'orderitems_committed_unfulfilled';\n relname | relpages | reltuples \n----------------------------------+----------+-----------\n orderitems_committed_unfulfilled | 3051 | 2112\n(1 row)\n\nIt knows there's only 2k-ish of them from the index. The 2 mil number is the same as what the planner expects if I disable using indexes and it does a seq scan, so I'm assuming it's just the guess from the column statistics and the planner is not using the size of the partial index.\n\nI'm running:\n=> select version();\n version \n-------------------------------------------------------------------------------------------\n PostgreSQL 12.4 on x86_64-pc-linux-gnu, compiled by gcc, a 97d579287 p 0be2109a97, 64-bit\n(1 row)\n\nI'm wondering if the behavior that I'm seeing is expected in 12.4, and if so if it changes in a later version or if I should file an enhancement request? Or if it's not expected is there's something I'm doing wrong, or should file a bug?\n\nThanks for your time.\n\n-- \n Olivier Poquet\n [email protected]\n\n\n",
"msg_date": "Wed, 28 Oct 2020 18:46:12 -0400",
"msg_from": "\"Olivier Poquet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan using partial index expects a much larger number of rows than is possible"
},
{
"msg_contents": "\"Olivier Poquet\" <[email protected]> writes:\n> Looking at it in more detail, I found that the planner is assuming that I'll get millions of rows back even when I do a simple query that does an index scan on my partial index:\n\nWe don't look at partial-index predicates when trying to estimate the\nselectivity of a WHERE clause. It's not clear to me whether that'd be\na useful thing to do, or whether it could be shoehorned into the system\neasily. (One big problem is that while the index size could provide\nan upper bound, it's not apparent how to combine that knowledge with\nselectivities of unrelated conditions. Also, it's riskier to extrapolate\na current rowcount estimate from stale relpages/reltuples data for an\nindex than it is for a table, because the index is less likely to scale\nup linearly.)\n\nIf this particular query is performance-critical, you might consider\nmaterializing the condition, that is something like\n\ncreate table orderitems (\n ... ,\n committed_unfulfilled bool GENERATED ALWAYS AS\n (LEAST(committed, quantity) > fulfilled) STORED\n);\n\nand then your queries and your partial-index predicate must look\nlike \"WHERE committed_unfulfilled\". Having done this, ANALYZE\nwould gather stats on the values of that column and the WHERE\nclauses would be estimated accurately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 19:30:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan using partial index expects a much larger number of rows than is possible"
},
{
"msg_contents": "Thanks Tom,\nThat makes perfect sense. \n\nI'd already gone the route of materializing the condition but I didn't even realize that generated columns was an option (I'd done the same with triggers instead). So thanks a lot of that too!\n\n-- \n Olivier Poquet\n [email protected]\n\nOn Wed, Oct 28, 2020, at 7:30 PM, Tom Lane wrote:\n> \"Olivier Poquet\" <[email protected]> writes:\n> > Looking at it in more detail, I found that the planner is assuming that I'll get millions of rows back even when I do a simple query that does an index scan on my partial index:\n> \n> We don't look at partial-index predicates when trying to estimate the\n> selectivity of a WHERE clause. It's not clear to me whether that'd be\n> a useful thing to do, or whether it could be shoehorned into the system\n> easily. (One big problem is that while the index size could provide\n> an upper bound, it's not apparent how to combine that knowledge with\n> selectivities of unrelated conditions. Also, it's riskier to extrapolate\n> a current rowcount estimate from stale relpages/reltuples data for an\n> index than it is for a table, because the index is less likely to scale\n> up linearly.)\n> \n> If this particular query is performance-critical, you might consider\n> materializing the condition, that is something like\n> \n> create table orderitems (\n> ... ,\n> committed_unfulfilled bool GENERATED ALWAYS AS\n> (LEAST(committed, quantity) > fulfilled) STORED\n> );\n> \n> and then your queries and your partial-index predicate must look\n> like \"WHERE committed_unfulfilled\". Having done this, ANALYZE\n> would gather stats on the values of that column and the WHERE\n> clauses would be estimated accurately.\n> \n> \t\t\tregards, tom lane\n>\n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:53:09 -0400",
"msg_from": "\"Olivier Poquet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan using partial index expects a much larger number of rows than is possible"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 5:30 PM Tom Lane <[email protected]> wrote:\n\n> \"Olivier Poquet\" <[email protected]> writes:\n> > Looking at it in more detail, I found that the planner is assuming that\n> I'll get millions of rows back even when I do a simple query that does an\n> index scan on my partial index:\n>\n> We don't look at partial-index predicates when trying to estimate the\n> selectivity of a WHERE clause. It's not clear to me whether that'd be\n> a useful thing to do, or whether it could be shoehorned into the system\n> easily. (One big problem is that while the index size could provide\n> an upper bound, it's not apparent how to combine that knowledge with\n> selectivities of unrelated conditions. Also, it's riskier to extrapolate\n> a current rowcount estimate from stale relpages/reltuples data for an\n> index than it is for a table, because the index is less likely to scale\n> up linearly.)\n>\n> regards, tom lane\n>\n\n\nAren't there custom stats created for functional indexes? Would it be\nfeasible to create those for partial indexes as well, maybe only\noptionally? I assume there may be giant gaps with that notion.\n\nOn Wed, Oct 28, 2020 at 5:30 PM Tom Lane <[email protected]> wrote:\"Olivier Poquet\" <[email protected]> writes:\n> Looking at it in more detail, I found that the planner is assuming that I'll get millions of rows back even when I do a simple query that does an index scan on my partial index:\n\nWe don't look at partial-index predicates when trying to estimate the\nselectivity of a WHERE clause. It's not clear to me whether that'd be\na useful thing to do, or whether it could be shoehorned into the system\neasily. (One big problem is that while the index size could provide\nan upper bound, it's not apparent how to combine that knowledge with\nselectivities of unrelated conditions. 
Also, it's riskier to extrapolate\na current rowcount estimate from stale relpages/reltuples data for an\nindex than it is for a table, because the index is less likely to scale\nup linearly.)\n\n regards, tom lane\nAren't there custom stats created for functional indexes? Would it be feasible to create those for partial indexes as well, maybe only optionally? I assume there may be giant gaps with that notion.",
"msg_date": "Thu, 29 Oct 2020 09:08:11 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan using partial index expects a much larger number of rows than is possible"
}
] |
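Tom Lane's generated-column suggestion from this thread can also be applied to the existing table rather than a fresh CREATE TABLE. A hedged sketch (PostgreSQL 12+; the committed/quantity/fulfilled column types are assumed to match the real orderitems table, and this ALTER TABLE form rewrites the table):

```sql
-- Materialize the predicate so ANALYZE gathers statistics on it.
ALTER TABLE orderitems
    ADD COLUMN committed_unfulfilled boolean
    GENERATED ALWAYS AS (LEAST(committed, quantity) > fulfilled) STORED;

-- Rebuild the partial index against the new column; queries must use
-- the same spelling so the planner can match both predicate and stats.
DROP INDEX orderitems_committed_unfulfilled;
CREATE INDEX orderitems_committed_unfulfilled
    ON orderitems (id)
    WHERE committed_unfulfilled;

ANALYZE orderitems;

EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM orderitems WHERE committed_unfulfilled;
```

After ANALYZE, the row estimate should come from the boolean column's statistics rather than from independent per-column guesses.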
[
{
"msg_contents": "Hi,\r\n\r\nI would like to join a partitioned table and have the joined columns in the where clause to be used for partition pruning.\r\nFrom some readings in the internet, I conclude that this was not possible in v12. I hoped for the “improvements in partition pruning” in v13, but it seems to me, that it is still not possible, or is it and I am missing something here?\r\n\r\nMy testcase:\r\ncreate table fact (part_key integer) partition by range (part_key);\r\ncreate table fact_100 partition of fact for values from (1) to (101);\r\ncreate table fact_200 partition of fact for values from (101) to (201);\r\n\r\ninsert into fact (part_key) select floor(random()*100+1) from generate_series(1,10000);\r\ninsert into fact (part_key) select floor(random()*100+101) from generate_series(1,10000);\r\n\r\ncreate table dim as (select distinct part_key from fact);\r\ncreate unique index on dim (part_key);\r\n\r\nanalyze fact;\r\nanalyze dim;\r\n\r\n-- Statement\r\nexplain SELECT\r\ncount(*)\r\nFROM\r\ndim INNER JOIN fact ON (dim.part_key=fact.part_key)\r\nWHERE dim.part_key >= 110 and dim.part_key <= 160;\r\n\r\nPlan shows me, that all partitions are scanned:\r\nAggregate (cost=461.00..461.01 rows=1 width=8)\r\n -> Hash Join (cost=4.64..448.25 rows=5100 width=0)\r\n Hash Cond: (fact.part_key = dim.part_key)\r\n -> Append (cost=0.00..390.00 rows=20000 width=4)\r\n -> Seq Scan on fact_100 fact_1 (cost=0.00..145.00 rows=10000 width=4) ⇐==== unnecessarily scanned\r\n -> Seq Scan on fact_200 fact_2 (cost=0.00..145.00 rows=10000 width=4)\r\n -> Hash (cost=4.00..4.00 rows=51 width=4)\r\n -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)\r\n Filter: ((part_key >= 110) AND (part_key <= 160))\r\n\r\n\r\nI know, that I could get rid of this problem, by rewriting the query to include the partitioned table in the where clause like this:\r\nWHERE fact.part_key >= 210 and fact.part_key <= 260\r\nPartition pruning happens very nicely then.\r\n\r\nUnfortunately this is not 
an option for us, because the code in our case is generated by some third party software (sigh).\r\n\r\nDo you have any suggestions, what else I could do? (Or maybe you could add it as a new feature for v14 😉)?\r\n\r\nRegards,\r\nSigrid\r\n",
"msg_date": "Tue, 3 Nov 2020 13:20:10 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition pruning with joins"
},
{
"msg_contents": "On Tue, 2020-11-03 at 13:20 +0000, Ehrenreich, Sigrid wrote:\n> I would like to join a partitioned table and have the joined columns in the where clause to be used for partition pruning.\n> From some readings in the internet, I conclude that this was not possible in v12. I hoped for the\n> “improvements in partition pruning” in v13, but it seems to me, that it is still not possible, or is it and I am missing something here?\n> \n> My testcase:\n> create table fact (part_key integer) partition by range (part_key);\n> create table fact_100 partition of fact for values from (1) to (101);\n> create table fact_200 partition of fact for values from (101) to (201);\n> \n> insert into fact (part_key) select floor(random()*100+1) from generate_series(1,10000);\n> insert into fact (part_key) select floor(random()*100+101) from generate_series(1,10000);\n> \n> create table dim as (select distinct part_key from fact);\n> create unique index on dim (part_key);\n> \n> analyze fact;\n> analyze dim;\n> \n> -- Statement\n> explain SELECT\n> count(*)\n> FROM\n> dim INNER JOIN fact ON (dim.part_key=fact.part_key)\n> WHERE dim.part_key >= 110 and dim.part_key <= 160;\n> \n> Plan shows me, that all partitions are scanned:\n> Aggregate (cost=461.00..461.01 rows=1 width=8)\n> -> Hash Join (cost=4.64..448.25 rows=5100 width=0)\n> Hash Cond: (fact.part_key = dim.part_key)\n> -> Append (cost=0.00..390.00 rows=20000 width=4)\n> -> Seq Scan on fact_100 fact_1 (cost=0.00..145.00 rows=10000 width=4) ⇐==== unnecessarily scanned\n> -> Seq Scan on fact_200 fact_2 (cost=0.00..145.00 rows=10000 width=4)\n> -> Hash (cost=4.00..4.00 rows=51 width=4)\n> -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)\n> Filter: ((part_key >= 110) AND (part_key <= 160))\n\nOne thing you could try is to partition \"dim\" just like \"fact\" and\nset \"enable_partitionwise_join = on\".\n\nI didn't test it, but that might do the trick.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 03 Nov 2020 16:45:21 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition pruning with joins"
},
{
"msg_contents": "Hi Laurenz,\r\n\r\nThat trick did it!\r\nGreat idea!\r\n\r\nI have tested it not only successfully with my little testcase, but with our real world data and it works there as well.\r\n\r\nThanks a lot for your help!\r\n\r\nRegards,\r\nSigrid\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Tuesday, November 3, 2020 4:45 PM\r\nTo: Ehrenreich, Sigrid <[email protected]>; [email protected]\r\nSubject: Re: Partition pruning with joins\r\n\r\nOn Tue, 2020-11-03 at 13:20 +0000, Ehrenreich, Sigrid wrote:\r\n> I would like to join a partitioned table and have the joined columns in the where clause to be used for partition pruning.\r\n> From some readings in the internet, I conclude that this was not possible in v12. I hoped for the\r\n> “improvements in partition pruning” in v13, but it seems to me, that it is still not possible, or is it and I am missing something here?\r\n> \r\n> My testcase:\r\n> create table fact (part_key integer) partition by range (part_key);\r\n> create table fact_100 partition of fact for values from (1) to (101);\r\n> create table fact_200 partition of fact for values from (101) to (201);\r\n> \r\n> insert into fact (part_key) select floor(random()*100+1) from generate_series(1,10000);\r\n> insert into fact (part_key) select floor(random()*100+101) from generate_series(1,10000);\r\n> \r\n> create table dim as (select distinct part_key from fact);\r\n> create unique index on dim (part_key);\r\n> \r\n> analyze fact;\r\n> analyze dim;\r\n> \r\n> -- Statement\r\n> explain SELECT\r\n> count(*)\r\n> FROM\r\n> dim INNER JOIN fact ON (dim.part_key=fact.part_key)\r\n> WHERE dim.part_key >= 110 and dim.part_key <= 160;\r\n> \r\n> Plan shows me, that all partitions are scanned:\r\n> Aggregate (cost=461.00..461.01 rows=1 width=8)\r\n> -> Hash Join (cost=4.64..448.25 rows=5100 width=0)\r\n> Hash Cond: (fact.part_key = dim.part_key)\r\n> -> Append (cost=0.00..390.00 rows=20000 width=4)\r\n> -> Seq Scan on 
fact_100 fact_1 (cost=0.00..145.00 rows=10000 width=4) ⇐==== unnecessarily scanned\r\n> -> Seq Scan on fact_200 fact_2 (cost=0.00..145.00 rows=10000 width=4)\r\n> -> Hash (cost=4.00..4.00 rows=51 width=4)\r\n> -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)\r\n> Filter: ((part_key >= 110) AND (part_key <= 160))\r\n\r\nOne thing you could try is to partition \"dim\" just like \"fact\" and\r\nset \"enable_partitionwise_join = on\".\r\n\r\nI didn't test it, but that might do the trick.\r\n\r\nYours,\r\nLaurenz Albe\r\n-- \r\nCybertec | https://www.cybertec-postgresql.com\r\n\r\n",
"msg_date": "Wed, 4 Nov 2020 08:47:52 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Partition pruning with joins"
},
{
"msg_contents": "On Wed, 4 Nov 2020 at 02:20, Ehrenreich, Sigrid <[email protected]> wrote:\n>\n> -- Statement\n> explain SELECT\n> count(*)\n> FROM\n> dim INNER JOIN fact ON (dim.part_key=fact.part_key)\n> WHERE dim.part_key >= 110 and dim.part_key <= 160;\n>\n> Plan shows me, that all partitions are scanned:\n> Aggregate (cost=461.00..461.01 rows=1 width=8)\n> -> Hash Join (cost=4.64..448.25 rows=5100 width=0)\n> Hash Cond: (fact.part_key = dim.part_key)\n> -> Append (cost=0.00..390.00 rows=20000 width=4)\n> -> Seq Scan on fact_100 fact_1 (cost=0.00..145.00 rows=10000 width=4) ⇐==== unnecessarily scanned\n> -> Seq Scan on fact_200 fact_2 (cost=0.00..145.00 rows=10000 width=4)\n> -> Hash (cost=4.00..4.00 rows=51 width=4)\n> -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)\n> Filter: ((part_key >= 110) AND (part_key <= 160))\n>\n>\n> I know, that I could get rid of this problem, by rewriting the query to include the partitioned table in the where clause like this:\n> WHERE fact.part_key >= 210 and fact.part_key <= 260\n> Partition pruning happens very nicely then.\n\nIt sounds like what I mentioned in [1] would be the best way to\noptimise this. Unfortunately, the idea didn't get much support and I\ndidn't pursue it any further.\n\nI think it would also be possible to perform run-time pruning on each\nrow from the inner side of the join and bitmap-OR the matching\npartitions each time then just scan those ones when performing the\nprobe phase of the hash join. However, in practice, I'm not too sure\nhow that could be made to work well since nodeHashjoin.c would have to\nbe in charge of collecting the matching partitions when building the\nhash table, but nodeAppend.c would have to be in charge of skipping\npartitions that can't have any matches. I'm unsure how exactly the\nhash join would communicate that to the Append. Traditionally, each\nnode is oblivious to its children nodes.\n\nI imagine the overheads of performing run-time pruning for each inner\nrow might be quite high for this. We do something like this for\nparameterised nested loop joins, but generally those are only chosen\nwhen the number of lookups is relatively small. And with Nested Loop,\nthe lookups are almost certainly much more expensive than a hash\nprobe, since they'll require scanning an index from the root each time\nwe get a new parameter.\n\n> Unfortunately this is not an option for us, because the code in our case is generated by some third party software (sigh).\n>\n> Do you have any suggestions, what else I could do? (Or maybe you could add it as a new feature for v14 )?\n\nIf there was some way to make a parameterised nested loop more\nfavourable, then that might help you. That would require an index on\nfact(part_key), but I imagine it's unlikely to work for you as the\nbenefits of run-time pruning are not really costed into the plan, so\nit may be unlikely that you'll coax the planner into choosing that\nplan. It may be an inferior plan anyway.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/30810.1449335261%40sss.pgh.pa.us#906319f5e212fc3a6a682f16da079f04\n\n\n",
"msg_date": "Thu, 5 Nov 2020 09:13:22 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition pruning with joins"
},
{
"msg_contents": "Hi David,\r\n\r\nThanks a lot for your response.\r\n\r\n> If there was some way to make a parameterised nested loop more favourable, then that might help you. \r\nSetting enable_hashjoin=OFF sped up the execution, but unfortunately we have not means to inject this into to application ☹\r\n\r\n> That would require an index on fact(part_key),\r\nI have tried this with our real data, but the index is not used (because another condition in our where clause is chosen to filter the data instead of the part_key). I have left this out of my posted testcase, to keep down the complexity of the testcase (I hope this is understandable, English is not my native language 😉).\r\n\r\nRegards,\r\nSigrid\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: Wednesday, November 4, 2020 9:13 PM\r\nTo: Ehrenreich, Sigrid <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: Partition pruning with joins\r\n\r\nOn Wed, 4 Nov 2020 at 02:20, Ehrenreich, Sigrid <[email protected]> wrote:\r\n>\r\n> -- Statement\r\n> explain SELECT\r\n> count(*)\r\n> FROM\r\n> dim INNER JOIN fact ON (dim.part_key=fact.part_key)\r\n> WHERE dim.part_key >= 110 and dim.part_key <= 160;\r\n>\r\n> Plan shows me, that all partitions are scanned:\r\n> Aggregate (cost=461.00..461.01 rows=1 width=8)\r\n> -> Hash Join (cost=4.64..448.25 rows=5100 width=0)\r\n> Hash Cond: (fact.part_key = dim.part_key)\r\n> -> Append (cost=0.00..390.00 rows=20000 width=4)\r\n> -> Seq Scan on fact_100 fact_1 (cost=0.00..145.00 rows=10000 width=4) ⇐==== unnecessarily scanned\r\n> -> Seq Scan on fact_200 fact_2 (cost=0.00..145.00 rows=10000 width=4)\r\n> -> Hash (cost=4.00..4.00 rows=51 width=4)\r\n> -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)\r\n> Filter: ((part_key >= 110) AND (part_key <= 160))\r\n>\r\n>\r\n> I know, that I could get rid of this problem, by rewriting the query to include the partitioned table in the where clause like this:\r\n> WHERE fact.part_key >= 210 
and fact.part_key <= 260\r\n> Partition pruning happens very nicely then.\r\n\r\nIt sounds like what I mentioned in [1] would be the best way to\r\noptimise this. Unfortunately, the idea didn't get much support and I\r\ndidn't pursue it any further.\r\n\r\nI think it would also be possible to perform run-time pruning on each\r\nrow from the inner side of the join and bitmap-OR the matching\r\npartitions each time then just scan those ones when performing the\r\nprobe phase of the hash join. However, in practice, I'm not too sure\r\nhow that could be made to work well since nodeHashjoin.c would have to\r\nbe in charge of collecting the matching partitions when building the\r\nhash table, but nodeAppend.c would have to be in charge of skipping\r\npartitions that can't have any matches. I'm unsure how exactly the\r\nhash join would communicate that to the Append. Traditionally, each\r\nnode is oblivious to its children nodes.\r\n\r\nI imaging the overheads of performing run-time pruning for each inner\r\nrow might be quite high for this. We do something like this for\r\nparameterised nested loop joins, but generally those are only chosen\r\nwhen the number of lookups is relatively small. And with Nested Loop,\r\nthe lookups are almost certainly much more expensive than a hash\r\nprobe, since they'll require scanning an index from the root each time\r\nwe get a new parameter.\r\n\r\n> Unfortunately this is not an option for us, because the code in our case is generated by some third party software (sigh).\r\n>\r\n> Do you have any suggestions, what else I could do? (Or maybe you could add it as a new feature for v14 )?\r\n\r\nIf there was some way to make a parameterised nested loop more\r\nfavourable, then that might help you. 
That would require an index on\r\nfact(part_key), but I imagine it's unlikely to work for you as the\r\nbenefits of run-time pruning are not really costed into the plan, so\r\nit may be unlikely that you'll coax the planner into choosing that\r\nplan. It may be an inferior plan anyway.\r\n\r\nDavid\r\n\r\n[1] https://www.postgresql.org/message-id/flat/30810.1449335261%40sss.pgh.pa.us#906319f5e212fc3a6a682f16da079f04\r\n",
"msg_date": "Thu, 5 Nov 2020 06:23:59 +0000",
"msg_from": "\"Ehrenreich, Sigrid\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Partition pruning with joins"
}
] |
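For reference, here is one way to spell out the fix Laurenz proposed and Sigrid confirmed, against the testcase from the start of this thread. The dim_p names are illustrative, not from the thread:

```sql
-- Partition dim the same way as fact so a partitionwise join applies.
CREATE TABLE dim_p (part_key integer) PARTITION BY RANGE (part_key);
CREATE TABLE dim_p_100 PARTITION OF dim_p FOR VALUES FROM (1) TO (101);
CREATE TABLE dim_p_200 PARTITION OF dim_p FOR VALUES FROM (101) TO (201);
INSERT INTO dim_p SELECT part_key FROM dim;
CREATE UNIQUE INDEX ON dim_p (part_key);
ANALYZE dim_p;

SET enable_partitionwise_join = on;

EXPLAIN SELECT count(*)
FROM dim_p INNER JOIN fact ON (dim_p.part_key = fact.part_key)
WHERE dim_p.part_key >= 110 AND dim_p.part_key <= 160;
```

With the dim side pruned to dim_p_200 by the WHERE clause, the pairwise join lets the planner skip the fact_100 scan as well.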
[
{
"msg_contents": "Hello (TL;DR):\n\nNoob here, so please bear with me. The SQL I'm presenting is part of a\nlarger PL/PGSQL script that generates generic \"counts\" from tables in our\ndatabase. This is code converted from an Oracle database that we recently\nmigrated from.\n\nI have a strange situation where a base query completes in about 30 seconds\nbut if I add a nextval() call to the select it never completes. There are\nother processes running that are accessing the same sequence, but I thought\nthat concurrency was not an issue for sequences (other than skipped\nvalues).\n\nWe are running on Google Cloud SQL v12 (I believe it is currently 12.3).\nWe are configured with a failover replica. The VM configured is 8 vCPUs\nand 16GB of RAM. PgAdmin shows that our cache hit rates are around 99%, as\ndoes this SQL:\n\nSELECT\n sum(heap_blks_read) as heap_read,\n sum(heap_blks_hit) as heap_hit,\n sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio\nFROM\n pg_statio_user_tables;\n\n heap_read | heap_hit | ratio\n------------+--------------+------------------------\n 1558247211 | 156357754256 | 0.99013242992145017164\n(1 row)\n\nWe run autovacuum. work_mem = 131072.\n\nThe base SQL without the nextval() call and plan are at:\nhttps://explain.depesz.com/s/T3Gn\n\nWhile the performance is not the fastest, 30 seconds for the execution is\nacceptable in our application. We only run this once a month. I am not\nlooking to optimize the query as it stands (yet). The only change that\ncauses it to be extremely slow or hang (can't tell which) is that I changed\nthe select from:\n\nselect unnest(....\n\nto\n\nselect nextval('sbowner.idgen'), unnest(....\n\nHere are all the tables/views involved, as requested in the \"Slow Query\nQuestions\" FAQ.\n\nI am aware that the structure of these views leaves A LOT to be desired,\nbut I don't think they have a bearing on this issue, since the addition of\nnextval() is the problem. 
We are going to restructure all of this and\nremove many layers eventually. Before subjecting the reader to this long\nlist of views, here's the theory. We have customers and orders for various\nproducts. Those products are grouped together into \"lists\" that can be\nselected. Some products could be in more than one list. At the same time,\nsome products are \"pre-release\", so they are only reported internally and\nomitted from these counts. Next, some of the orders are \"autoship\",\nmeaning that the customer has subscribed to receive the product\nautomatically. Any customer with an autoship in the same category of\nproduct is to be omitted from these counts.\n\nAs there are many of these \"lists\", there was a naming scheme created in\nthe Oracle database we converted from. Oracle allows synonyms so that we\ncould create one view and rename it to match the naming scheme. PostgreSQL\ndoes not allow that, so instead we had to create views on views to keep the\nnames intact.\n\nThe views/tables involved are:\n\n Table \"lruser.count_tempcols\"\n Column | Type | Collation | Nullable | Default\n-----------+-----------------------------+-----------+----------+---------\n typecode | character(1) | | |\n disporder | smallint | | |\n mindate | timestamp without time zone | | |\n maxdate | timestamp without time zone | | |\n fmtdate | character varying(10) | | |\n\nThis table holds generated date ranges for the counts to be generated by\nthe main query. Records are inserted once at the start of execution of the\nscript, followed by creating an index:\n\ncreate index count_tempcols_ndx on count_tempcols(mindate, maxdate,\nfmtdate, disporder, typecode);\nanalyze count_tempcols;\n\nIt is actually created as a temporary table, but that makes it hard to\npresent here. 
;-)\n\nHere is fortherb_indcounts and the entire view chain:\n\nfortherb_indcounts - an extract of the data that we actually generate the\ncounts from\n\n View \"lruser.fortherb_indcounts\"\n Column | Type | Collation | Nullable | Default |\nStorage | Description\n----------+-----------------------------+-----------+----------+---------+----------+-------------\n id | bigint | | | |\nplain |\n state | character varying(2) | | | |\nextended |\n zip | character varying(6) | | | |\nextended |\n rtype | bpchar | | | |\nextended |\n sexcode | character(1) | | | |\nextended |\n origdate | timestamp without time zone | | | |\nplain |\n hotline | timestamp without time zone | | | |\nplain |\n numpurch | bigint | | | |\nplain |\n scf | text | | | |\nextended |\n phone | character varying(16) | | | |\nextended |\n paymeth | character varying(4) | | | |\nextended |\n email | character varying(40) | | | |\nextended |\n itemcode | character varying(10) | | | |\nextended |\nView definition:\n SELECT c.id,\n c.state,\n c.zip,\n c.rtype,\n c.sexcode,\n c.origdate,\n date_trunc('day'::text, t.hotlinedate) AS hotline,\n c.numpurch,\n substr(c.zip::text, 1, 3) AS scf,\n c.phone,\n t.paymeth,\n c.email,\n t.itemcode\n FROM fortherb_ind c,\n \"fortherb_ind$rent$tracking\" t\n WHERE c.id = t.pasid;\n\nThis next table is a list of \"rentable\" transactions - those transactions\nthat we want to actually count (omitting pre-release and autoships).\n\n View \"lruser.fortherb_ind$rent$tracking\"\n Column | Type | Collation | Nullable | Default\n| Storage | Description\n-------------+-----------------------------+-----------+----------+---------+----------+-------------\n pasid | bigint | | |\n| plain |\n jobid | bigint | | |\n| plain |\n itemcode | character varying(10) | | |\n| extended |\n hotlinedate | timestamp without time zone | | |\n| plain |\n updatedate | timestamp without time zone | | |\n| plain |\n rectype | character(1) | | |\n| extended |\n autoship | character(1) | | |\n| 
extended |\n subid | character varying(20) | | |\n| extended |\n amount | numeric(10,2) | | |\n| main |\n sourcecode | character varying(20) | | |\n| extended |\n ordernum | character varying(20) | | |\n| extended |\n paymeth | character varying(4) | | |\n| extended |\nView definition:\n SELECT \"fortherb$rent$i_tracking\".pasid,\n \"fortherb$rent$i_tracking\".jobid,\n \"fortherb$rent$i_tracking\".itemcode,\n \"fortherb$rent$i_tracking\".hotlinedate,\n \"fortherb$rent$i_tracking\".updatedate,\n \"fortherb$rent$i_tracking\".rectype,\n \"fortherb$rent$i_tracking\".autoship,\n \"fortherb$rent$i_tracking\".subid,\n \"fortherb$rent$i_tracking\".amount,\n \"fortherb$rent$i_tracking\".sourcecode,\n \"fortherb$rent$i_tracking\".ordernum,\n \"fortherb$rent$i_tracking\".paymeth\n FROM \"fortherb$rent$i_tracking\";\n\nThis is the same table as the previous, just with a different name (Oracle\nsynonym simulation):\n\n View \"lruser.fortherb$rent$i_tracking\"\n Column | Type | Collation | Nullable | Default\n| Storage | Description\n-------------+-----------------------------+-----------+----------+---------+----------+-------------\n pasid | bigint | | |\n| plain |\n jobid | bigint | | |\n| plain |\n itemcode | character varying(10) | | |\n| extended |\n hotlinedate | timestamp without time zone | | |\n| plain |\n updatedate | timestamp without time zone | | |\n| plain |\n rectype | character(1) | | |\n| extended |\n autoship | character(1) | | |\n| extended |\n subid | character varying(20) | | |\n| extended |\n amount | numeric(10,2) | | |\n| main |\n sourcecode | character varying(20) | | |\n| extended |\n ordernum | character varying(20) | | |\n| extended |\n paymeth | character varying(4) | | |\n| extended |\nView definition:\n SELECT i.pasid,\n i.jobid,\n i.itemcode,\n i.hotlinedate,\n i.updatedate,\n i.rectype,\n i.autoship,\n i.subid,\n i.amount,\n i.sourcecode,\n i.ordernum,\n i.paymeth\n FROM glm.glmitems i\n WHERE (i.prodtable::text = ANY 
(ARRAY['fortherb'::character\nvarying::text, 'fortherb2'::character varying::text])) AND NOT (EXISTS (\nSELECT NULL::text AS text\n FROM glmprods\n WHERE glmprods.prerelease IS NOT NULL AND\nglmprods.prerelease::text <> ''::text AND glmprods.prodcode::text =\ni.itemcode::text)) AND (i.rectype = ANY (ARRAY['2'::bpchar, '3'::bpchar]))\nAND NOT (EXISTS ( SELECT NULL::text AS text\n FROM \"fortherb$rent$i_track_as\" a\n WHERE a.pasid = i.pasid));\n\nThis is a view of all transactions with the \"list\" they belong to appended,\nas well as a pre-release code.\n\n View \"glm.glmitems\"\n Column | Type | Collation | Nullable | Default\n| Storage | Description\n-------------+-----------------------------+-----------+----------+---------+----------+-------------\n pasid | bigint | | |\n| plain |\n jobid | bigint | | |\n| plain |\n itemcode | character varying(10) | | |\n| extended |\n hotlinedate | timestamp without time zone | | |\n| plain |\n updatedate | timestamp without time zone | | |\n| plain |\n rectype | character(1) | | |\n| extended |\n autoship | character(1) | | |\n| extended |\n subid | character varying(20) | | |\n| extended |\n amount | numeric(10,2) | | |\n| main |\n sourcecode | character varying(20) | | |\n| extended |\n ordernum | character varying(20) | | |\n| extended |\n paymeth | character varying(4) | | |\n| extended |\n itemid | bigint | | |\n| plain |\n prodtable | character varying | | |\n| extended |\n category | character varying(50) | | |\n| extended |\n subcategory | character varying(15) | | |\n| extended |\n prerelease | character(1) | | |\n| extended |\nView definition:\n SELECT t.pasid,\n t.jobid,\n t.itemcode,\n t.hotlinedate,\n t.updatedate,\n t.rectype,\n t.autoship,\n t.subid,\n t.amount,\n t.sourcecode,\n t.ordernum,\n t.paymeth,\n t.itemid,\n CASE\n WHEN t.hotlinedate >= p.changedate THEN p.prodtable\n ELSE p.prodtable_old\n END AS prodtable,\n p.category,\n p.subcategory,\n p.prerelease\n FROM \"glm$tracking\" t\n JOIN 
glm.glmproducts p ON t.itemcode::text = p.prodcode::text;\n\nThis is the base table of transactions:\n\n Table \"lruser.glm$tracking\"\n Column | Type | Collation | Nullable | Default\n-------------+-----------------------------+-----------+----------+---------\n pasid | bigint | | |\n jobid | bigint | | |\n itemcode | character varying(10) | | |\n hotlinedate | timestamp without time zone | | |\n updatedate | timestamp without time zone | | |\n rectype | character(1) | | |\n autoship | character(1) | | |\n subid | character varying(20) | | |\n amount | numeric(10,2) | | |\n sourcecode | character varying(20) | | |\n ordernum | character varying(20) | | |\n paymeth | character varying(4) | | |\n itemid | bigint | | |\nIndexes:\n \"glm$tracking$countndx\" btree (itemcode, pasid, rectype, hotlinedate)\n \"glm$tracking$ndx\" btree (itemcode, hotlinedate, rectype, pasid)\n \"glm$tracking$prodndx\" btree (itemcode, pasid, rectype, hotlinedate)\n \"glm$tracking$rent$ndx\" btree (pasid, hotlinedate, itemcode) INCLUDE\n(rectype)\nForeign-key constraints:\n \"glm$autoship$fk\" FOREIGN KEY (subid) REFERENCES \"glm$autoship\"(subid)\n \"glm$cust$fk\" FOREIGN KEY (pasid) REFERENCES glm(id)\n \"glm$tracking$prod$fk\" FOREIGN KEY (itemcode) REFERENCES\nglm.glmproducts(prodcode)\nTriggers:\n \"glm$tracking$itemid\" BEFORE INSERT OR UPDATE ON \"glm$tracking\" FOR\nEACH ROW EXECUTE FUNCTION \"trigger_fct_glm$tracking$itemid\"()\n\nThis is an old version of our table of products sold - it should be\nreplaced by our newer \"glmproducts\" table.\n\n Table \"lruser.glmprods\" *(OLD\nVERSION OF GLMPRODUCTS - SHOULD BE REMOVED!)*\n Column | Type | Collation | Nullable | Default |\nStorage | Stats target | Description\n------------+-----------------------+-----------+----------+---------+----------+--------------+-------------\n prodcode | character varying(10) | | not null | |\nextended | |\n prodtable | character varying(30) | | | |\nextended | |\n prerelease | character(1) | | | 
|\nextended | |\n category | character varying(50) | | | |\nextended | |\nIndexes:\n \"glmprods$pk\" PRIMARY KEY, btree (prodcode)\nAccess method: heap\n\nThis is a view that identifies customers with autoships and the category\nthe product belongs to. Note that subscriptions can end, hence the\ncanceldate. We only want to omit customers with *active* subscription in\nproducts of the same category.\n\n View \"lruser.category_autoship\"\n Column | Type | Collation | Nullable | Default\n| Storage | Description\n------------+-----------------------------+-----------+----------+---------+----------+-------------\n category | character varying(50) | | |\n| extended |\n prodtable | character varying(30) | | |\n| extended |\n subid | character varying(20) | | |\n| extended |\n pasid | bigint | | |\n| plain |\n jobid | bigint | | |\n| plain |\n updatedate | timestamp without time zone | | |\n| plain |\n startdate | timestamp without time zone | | |\n| plain |\n canceldate | timestamp without time zone | | |\n| plain |\n itemcode | character varying(10) | | |\n| extended |\nView definition:\n SELECT p.category,\n p.prodtable,\n a.subid,\n a.pasid,\n a.jobid,\n a.updatedate,\n a.startdate,\n a.canceldate,\n a.itemcode\n FROM \"glm$autoship\" a,\n glmprods p\n WHERE a.itemcode::text = p.prodcode::text;\n\nThis is the base table of the actual autoship subscriptions.\n\n Table \"lruser.glm$autoship\"\n Column | Type | Collation | Nullable | Default\n------------+-----------------------------+-----------+----------+---------\n subid | character varying(20) | | not null |\n pasid | bigint | | |\n jobid | bigint | | |\n updatedate | timestamp without time zone | | |\n startdate | timestamp without time zone | | |\n canceldate | timestamp without time zone | | |\n itemcode | character varying(10) | | |\nIndexes:\n \"glm$autoship$pk\" PRIMARY KEY, btree (subid)\n \"glm$autoship$catndx\" btree (pasid, itemcode, canceldate)\n \"glm$autoship$prodndx\" btree (itemcode, pasid, 
canceldate)\nForeign-key constraints:\n \"glm$autoship$prodfk\" FOREIGN KEY (itemcode) REFERENCES\nglm.glmproducts(prodcode)\nReferenced by:\n TABLE \"\"glm$tracking\"\" CONSTRAINT \"glm$autoship$fk\" FOREIGN KEY (subid)\nREFERENCES \"glm$autoship\"(subid)\n\nThis is the new table of products.\n\n Table \"glm.glmproducts\"\n Column | Type | Collation | Nullable |\nDefault\n---------------+-----------------------------+-----------+----------+---------\n id | bigint | | |\n prodcode | character varying(8) | | not null |\n prodtable | character varying(20) | | not null |\n category | character varying(50) | | |\n prodtable_old | character varying(30) | | |\n category_old | character varying(50) | | |\n prodname | character varying(30) | | |\n broker | character varying(20) | | |\n prerelease | character(1) | | |\n exclude | character(1) | | |\n changedate | timestamp without time zone | | |\n subcategory | character varying(15) | | |\n changedate_ih | timestamp without time zone | | |\nIndexes:\n \"glmproducts$pk\" PRIMARY KEY, btree (prodcode)\nReferenced by:\n TABLE \"\"glm$autoship\"\" CONSTRAINT \"glm$autoship$prodfk\" FOREIGN KEY\n(itemcode) REFERENCES glm.glmproducts(prodcode)\n TABLE \"\"glm$tracking\"\" CONSTRAINT \"glm$tracking$prod$fk\" FOREIGN KEY\n(itemcode) REFERENCES glm.glmproducts(prodcode)\nTriggers:\n \"aur$glmproducts\" AFTER UPDATE ON glm.glmproducts FOR EACH ROW EXECUTE\nFUNCTION glm.\"trigger_fct_aur$glmproducts\"()\n \"bdr$glmproducts\" BEFORE DELETE ON glm.glmproducts FOR EACH ROW EXECUTE\nFUNCTION glm.\"trigger_fct_bdr$glmproducts\"()\n \"biur$glmproducts\" BEFORE INSERT OR UPDATE ON glm.glmproducts FOR EACH\nROW EXECUTE FUNCTION glm.\"trigger_fct_biur$glmproducts\"()\n\nI believe that's the entire list of tables/views involved. Again, my\napologies for the long post. I presented all of this for completeness,\nalthough I don't believe it has anything to do with the actual problem. 
:-/\n\nThanks in advance for any advice!\n\n Eric Raskin\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n 914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n [email protected]\n\nMt Kisco, NY 10549\n http://www.paslists.com\",
"msg_date": "Wed, 4 Nov 2020 12:10:21 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "Eric Raskin <[email protected]> writes:\n> I have a strange situation where a base query completes in about 30 seconds\n> but if I add a nextval() call to the select it never completes. There are\n> other processes running that are accessing the same sequence, but I thought\n> that concurrency was not an issue for sequences (other than skipped\n> values).\n\nShouldn't be, probably ... but did you check to see if the query is\nblocked on a lock? (See pg_stat_activity or pg_locks views.)\n\n> The only change that\n> causes it to be extremely slow or hang (can't tell which) is that I changed\n> the select from:\n> select unnest(....\n> to\n> select nextval('sbowner.idgen'), unnest(....\n\nWithout seeing the complete query it's hard to say much. But if\nthis isn't the topmost select list, maybe what's happening is that\nthe presence of a volatile function in a sub-select is defeating\nsome key plan optimization. Did you compare plain EXPLAIN (w/out\nANALYZE) output for the two cases, to see if the plan shape changes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 13:04:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "Thanks for the reply. I see that the explain.depesz.com did not show you\nthe query. My apologies:\n\nselect unnest(array[273941676,273941677,273941678,273941679,273941680])\ncountrow_id,\n disporder, fmtdate, typecode,\n\n unnest(array[count_273941676,count_273941677,count_273941678,count_273941679,count_273941680])\ncountval\n from (select coalesce(count(distinct id_273941676),0) count_273941676,\n coalesce(count(distinct id_273941677),0) count_273941677,\n coalesce(count(distinct id_273941678),0) count_273941678,\n coalesce(count(distinct id_273941679),0) count_273941679,\n coalesce(count(distinct id_273941680),0) count_273941680,\n disporder, fmtdate, typecode\n from (select case when sexcode = 'M' then id else null end\nid_273941676,\n case when sexcode = 'F' then id else null end\nid_273941677,\n case when sexcode = 'A' then id else null end\nid_273941678,\n case when sexcode = 'C' then id else null end\nid_273941679,\n case when sexcode not in ('M','F','A','C') then id\nelse null end id_273941680,\n hotline cnt_hotline\n from lruser.fortherb_indcounts c\n where ( (rtype = '2')\n and ((sexcode = 'M') or (sexcode = 'F') or (sexcode = 'A')\nor (sexcode = 'C') or (sexcode not in ('M','F','A','C'))\n )\n )\n ) as x\nright outer join count_tempcols t on (x.cnt_hotline between t.mindate and\nt.maxdate) group by disporder, fmtdate, typecode ) as y\n\nI know it seems overly complicated, but it is auto-generated by our code.\nThe conditions and fields are variable based on what the user wants to\ngenerate.\n\nThis is the topmost select. The only difference that causes the hang is\nadding nextval('sbowner.idgen') to the start of the select right before the\nfirst unnest().\n\nIn the real application, this code feeds an insert statement with a trigger\nthat accesses the sequence where we store the results of the query. 
I\n\"simplified\" it and discovered that the nextval() was the difference that\ncaused the performance hit.\n\n Eric\n\n\nOn Wed, Nov 4, 2020 at 1:04 PM Tom Lane <[email protected]> wrote:\n\n> Eric Raskin <[email protected]> writes:\n> > I have a strange situation where a base query completes in about 30\n> seconds\n> > but if I add a nextval() call to the select it never completes. There\n> are\n> > other processes running that are accessing the same sequence, but I\n> thought\n> > that concurrency was not an issue for sequences (other than skipped\n> > values).\n>\n> Shouldn't be, probably ... but did you check to see if the query is\n> blocked on a lock? (See pg_stat_activity or pg_locks views.)\n>\n> > The only change that\n> > causes it to be extremely slow or hang (can't tell which) is that I\n> changed\n> > the select from:\n> > select unnest(....\n> > to\n> > select nextval('sbowner.idgen'), unnest(....\n>\n> Without seeing the complete query it's hard to say much. But if\n> this isn't the topmost select list, maybe what's happening is that\n> the presence of a volatile function in a sub-select is defeating\n> some key plan optimization. Did you compare plain EXPLAIN (w/out\n> ANALYZE) output for the two cases, to see if the plan shape changes?\n>\n> regards, tom lane\n>\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n 914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n [email protected]\n\nMt Kisco, NY 10549\n http://www.paslists.com\n\nThanks for the reply. I see that the explain.depesz.com did not show you the query. 
My apologies:select unnest(array[273941676,273941677,273941678,273941679,273941680]) countrow_id, disporder, fmtdate, typecode, unnest(array[count_273941676,count_273941677,count_273941678,count_273941679,count_273941680]) countval from (select coalesce(count(distinct id_273941676),0) count_273941676, coalesce(count(distinct id_273941677),0) count_273941677, coalesce(count(distinct id_273941678),0) count_273941678, coalesce(count(distinct id_273941679),0) count_273941679, coalesce(count(distinct id_273941680),0) count_273941680, disporder, fmtdate, typecode from (select case when sexcode = 'M' then id else null end id_273941676, case when sexcode = 'F' then id else null end id_273941677, case when sexcode = 'A' then id else null end id_273941678, case when sexcode = 'C' then id else null end id_273941679, case when sexcode not in ('M','F','A','C') then id else null end id_273941680, hotline cnt_hotline from lruser.fortherb_indcounts c where ( (rtype = '2') and ((sexcode = 'M') or (sexcode = 'F') or (sexcode = 'A') or (sexcode = 'C') or (sexcode not in ('M','F','A','C')) ) ) ) as xright outer join count_tempcols t on (x.cnt_hotline between t.mindate and t.maxdate) group by disporder, fmtdate, typecode ) as yI know it seems overly complicated, but it is auto-generated by our code. The conditions and fields are variable based on what the user wants to generate. This is the topmost select. The only difference that causes the hang is adding nextval('sbowner.idgen') to the start of the select right before the first unnest().In the real application, this code feeds an insert statement with a trigger that accesses the sequence where we store the results of the query. I \"simplified\" it and discovered that the nextval() was the difference that caused the performance hit. 
EricOn Wed, Nov 4, 2020 at 1:04 PM Tom Lane <[email protected]> wrote:Eric Raskin <[email protected]> writes:\n> I have a strange situation where a base query completes in about 30 seconds\n> but if I add a nextval() call to the select it never completes. There are\n> other processes running that are accessing the same sequence, but I thought\n> that concurrency was not an issue for sequences (other than skipped\n> values).\n\nShouldn't be, probably ... but did you check to see if the query is\nblocked on a lock? (See pg_stat_activity or pg_locks views.)\n\n> The only change that\n> causes it to be extremely slow or hang (can't tell which) is that I changed\n> the select from:\n> select unnest(....\n> to\n> select nextval('sbowner.idgen'), unnest(....\n\nWithout seeing the complete query it's hard to say much. But if\nthis isn't the topmost select list, maybe what's happening is that\nthe presence of a volatile function in a sub-select is defeating\nsome key plan optimization. Did you compare plain EXPLAIN (w/out\nANALYZE) output for the two cases, to see if the plan shape changes?\n\n regards, tom lane\n-- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Eric H. Raskin 914-765-0500 x120 or 315-338-4461 (direct)Professional Advertising Systems Inc. fax: 914-765-0500 or 315-338-4461 (direct)3 Morgan Drive #310 [email protected] Kisco, NY 10549 http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 13:22:33 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "And, to follow up on your question, the plan shape DOES change when I\nadd/remove the nextval() on a plain explain.\n\nWithout nextval(): https://explain.depesz.com/s/SCdY\n\nWith nextval(): https://explain.depesz.com/s/oLPn\n\n\n\n\nOn Wed, Nov 4, 2020 at 1:22 PM Eric Raskin <[email protected]> wrote:\n\n> Thanks for the reply. I see that the explain.depesz.com did not show you\n> the query. My apologies:\n>\n> select unnest(array[273941676,273941677,273941678,273941679,273941680])\n> countrow_id,\n> disporder, fmtdate, typecode,\n>\n> unnest(array[count_273941676,count_273941677,count_273941678,count_273941679,count_273941680])\n> countval\n> from (select coalesce(count(distinct id_273941676),0) count_273941676,\n> coalesce(count(distinct id_273941677),0) count_273941677,\n> coalesce(count(distinct id_273941678),0) count_273941678,\n> coalesce(count(distinct id_273941679),0) count_273941679,\n> coalesce(count(distinct id_273941680),0) count_273941680,\n> disporder, fmtdate, typecode\n> from (select case when sexcode = 'M' then id else null end\n> id_273941676,\n> case when sexcode = 'F' then id else null end\n> id_273941677,\n> case when sexcode = 'A' then id else null end\n> id_273941678,\n> case when sexcode = 'C' then id else null end\n> id_273941679,\n> case when sexcode not in ('M','F','A','C') then id\n> else null end id_273941680,\n> hotline cnt_hotline\n> from lruser.fortherb_indcounts c\n> where ( (rtype = '2')\n> and ((sexcode = 'M') or (sexcode = 'F') or (sexcode = 'A')\n> or (sexcode = 'C') or (sexcode not in ('M','F','A','C'))\n> )\n> )\n> ) as x\n> right outer join count_tempcols t on (x.cnt_hotline between t.mindate and\n> t.maxdate) group by disporder, fmtdate, typecode ) as y\n>\n> I know it seems overly complicated, but it is auto-generated by our code.\n> The conditions and fields are variable based on what the user wants to\n> generate.\n>\n> This is the topmost select. 
The only difference that causes the hang is\n> adding nextval('sbowner.idgen') to the start of the select right before the\n> first unnest().\n>\n> In the real application, this code feeds an insert statement with a\n> trigger that accesses the sequence where we store the results of the\n> query. I \"simplified\" it and discovered that the nextval() was the\n> difference that caused the performance hit.\n>\n> Eric\n>\n>\n> On Wed, Nov 4, 2020 at 1:04 PM Tom Lane <[email protected]> wrote:\n>\n>> Eric Raskin <[email protected]> writes:\n>> > I have a strange situation where a base query completes in about 30\n>> seconds\n>> > but if I add a nextval() call to the select it never completes. There\n>> are\n>> > other processes running that are accessing the same sequence, but I\n>> thought\n>> > that concurrency was not an issue for sequences (other than skipped\n>> > values).\n>>\n>> Shouldn't be, probably ... but did you check to see if the query is\n>> blocked on a lock? (See pg_stat_activity or pg_locks views.)\n>>\n>> > The only change that\n>> > causes it to be extremely slow or hang (can't tell which) is that I\n>> changed\n>> > the select from:\n>> > select unnest(....\n>> > to\n>> > select nextval('sbowner.idgen'), unnest(....\n>>\n>> Without seeing the complete query it's hard to say much. But if\n>> this isn't the topmost select list, maybe what's happening is that\n>> the presence of a volatile function in a sub-select is defeating\n>> some key plan optimization. Did you compare plain EXPLAIN (w/out\n>> ANALYZE) output for the two cases, to see if the plan shape changes?\n>>\n>> regards, tom lane\n>>\n>\n>\n> --\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Eric H. 
Raskin\n> 914-765-0500 x120 or *315-338-4461\n> (direct)*\n>\n> Professional Advertising Systems Inc.\n> fax: 914-765-0500 or *315-338-4461\n> (direct)*\n>\n> 3 Morgan Drive #310\n> [email protected]\n>\n> Mt Kisco, NY 10549\n> http://www.paslists.com\n>\n>\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n 914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n [email protected]\n\nMt Kisco, NY 10549\n http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 13:25:42 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "Eric Raskin <[email protected]> writes:\n> And, to follow up on your question, the plan shape DOES change when I\n> add/remove the nextval() on a plain explain.\n> Without nextval(): https://explain.depesz.com/s/SCdY\n> With nextval(): https://explain.depesz.com/s/oLPn\n\nAh, there's your problem, I think: the plan without nextval() is\nparallelized while the plan with nextval() is not, because nextval() is\nmarked as parallel-unsafe. It's not immediately clear why that would\nresult in more than about a 4X speed difference, given that the parallel\nplan is using 4 workers. But some of the rowcount estimates seem fairly\nfar off, so I'm betting that the planner is just accidentally lighting on\na decent plan when it's using parallelism while making some poor choices\nwhen it isn't.\n\nThe reason for the original form of your problem is likely that we don't\nuse parallelism at all in non-SELECT queries, so you ended up with a bad\nplan even though the nextval() was hidden in a trigger.\n\nWhat you need to do is get the rowcount estimates nearer to reality\n--- those places where you've got estimated rowcount 1 while reality\nis tens or hundreds of thousands of rows are just disasters waiting\nto bite. I suspect most of the problem is join conditions like\n\nJoin Filter: (CASE WHEN (c.rtype = ANY ('{0,1,7,9}'::bpchar[])) THEN c.rtype ELSE x.rtype END = '2'::bpchar)\n\nThe planner just isn't going to have any credible idea how selective\nthat is. I wonder to what extent you could fix this by storing\ngenerated columns that represent the derived conditions you want to\nfilter on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 14:01:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "OK - I see. And to add insult to injury, I tried creating a temporary\ntable to store the intermediate results. Then I was going to just do an\ninsert... select... to insert the rows. That would de-couple the\nnextval() from the query.\n\nStrangely, the first query I tried it on worked great. But, when I tried\nto add a second set of data with a similar query to the same temporary\ntable, it slowed right down again. And, of course, when I remove the\ninsert, it's fine.\n\nAnd, of course, your explanation that inserts will not be parallelized must\nbe the reason. I will certainly re-vacuum the tables. I wonder why\nauto-vacuum didn't collect better stats. vacuum analyze <table> is all I\nneed, right?\n\nAs a last resort, what about a PL/PGSQL procedure loop on the query\nresult? Since the insert is very few rows relative to the work the select\nhas to do, I could just turn the insert.. select.. into a for loop. Then\nthe select could be parallel?\n\nWhat do you think?\n\n\nOn Wed, Nov 4, 2020 at 2:01 PM Tom Lane <[email protected]> wrote:\n\n> Eric Raskin <[email protected]> writes:\n> > And, to follow up on your question, the plan shape DOES change when I\n> > add/remove the nextval() on a plain explain.\n> > Without nextval(): https://explain.depesz.com/s/SCdY\n> > With nextval(): https://explain.depesz.com/s/oLPn\n>\n> Ah, there's your problem, I think: the plan without nextval() is\n> parallelized while the plan with nextval() is not, because nextval() is\n> marked as parallel-unsafe. It's not immediately clear why that would\n> result in more than about a 4X speed difference, given that the parallel\n> plan is using 4 workers. 
But some of the rowcount estimates seem fairly\n> far off, so I'm betting that the planner is just accidentally lighting on\n> a decent plan when it's using parallelism while making some poor choices\n> when it isn't.\n>\n> The reason for the original form of your problem is likely that we don't\n> use parallelism at all in non-SELECT queries, so you ended up with a bad\n> plan even though the nextval() was hidden in a trigger.\n>\n> What you need to do is get the rowcount estimates nearer to reality\n> --- those places where you've got estimated rowcount 1 while reality\n> is tens or hundreds of thousands of rows are just disasters waiting\n> to bite. I suspect most of the problem is join conditions like\n>\n> Join Filter: (CASE WHEN (c.rtype = ANY ('{0,1,7,9}'::bpchar[])) THEN\n> c.rtype ELSE x.rtype END = '2'::bpchar)\n>\n> The planner just isn't going to have any credible idea how selective\n> that is. I wonder to what extent you could fix this by storing\n> generated columns that represent the derived conditions you want to\n> filter on.\n>\n> regards, tom lane\n>\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n 914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n [email protected]\n\nMt Kisco, NY 10549\n http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 14:12:07 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "Eric Raskin <[email protected]> writes:\n> And, of course, your explanation that inserts will not be parallelized must\n> be the reason. I will certainly re-vacuum the tables. I wonder why\n> auto-vacuum didn't collect better stats. vacuum analyze <table> is all I\n> need, right?\n\nPlain ANALYZE is enough to collect stats; but I doubt that'll improve\nmatters for you. The problem is basically that the planner can't do\nanything with a CASE construct, so you end up with default selectivity\nestimates for anything involving a CASE, statistics or no statistics.\nYou need to try to reformulate the query with simpler join conditions.\n\n> As a last resort, what about a PL/PGSQL procedure loop on the query\n> result? Since the insert is very few rows relative to the work the select\n> has to do, I could just turn the insert.. select.. into a for loop. Then\n> the select could be parallel?\n\nMaybe, but you're still skating on a cliff edge. I think it's pure chance\nthat the parallelized query is working acceptably well; next month with\nslightly different conditions, it might not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 14:23:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "OK -- got it. Thanks very much for your help. I'll see what I can do to\ndenormalize the case statements into actual columns to support the queries.\n\nOn Wed, Nov 4, 2020 at 2:23 PM Tom Lane <[email protected]> wrote:\n\n> Eric Raskin <[email protected]> writes:\n> > And, of course, your explanation that inserts will not be parallelized\n> must\n> > be the reason. I will certainly re-vacuum the tables. I wonder why\n> > auto-vacuum didn't collect better stats. vacuum analyze <table> is all\n> I\n> > need, right?\n>\n> Plain ANALYZE is enough to collect stats; but I doubt that'll improve\n> matters for you. The problem is basically that the planner can't do\n> anything with a CASE construct, so you end up with default selectivity\n> estimates for anything involving a CASE, statistics or no statistics.\n> You need to try to reformulate the query with simpler join conditions.\n>\n> > As a last resort, what about a PL/PGSQL procedure loop on the query\n> > result? Since the insert is very few rows relative to the work the\n> select\n> > has to do, I could just turn the insert.. select.. into a for loop. Then\n> > the select could be parallel?\n>\n> Maybe, but you're still skating on a cliff edge. I think it's pure chance\n> that the parallelized query is working acceptably well; next month with\n> slightly different conditions, it might not.\n>\n> regards, tom lane\n>\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n 914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n [email protected]\n\nMt Kisco, NY 10549\n http://www.paslists.com\n\nOK -- got it. Thanks very much for your help. 
I'll see what I can do to denormalize the case statements into actual columns to support the queries.On Wed, Nov 4, 2020 at 2:23 PM Tom Lane <[email protected]> wrote:Eric Raskin <[email protected]> writes:\n> And, of course, your explanation that inserts will not be parallelized must\n> be the reason. I will certainly re-vacuum the tables. I wonder why\n> auto-vacuum didn't collect better stats. vacuum analyze <table> is all I\n> need, right?\n\nPlain ANALYZE is enough to collect stats; but I doubt that'll improve\nmatters for you. The problem is basically that the planner can't do\nanything with a CASE construct, so you end up with default selectivity\nestimates for anything involving a CASE, statistics or no statistics.\nYou need to try to reformulate the query with simpler join conditions.\n\n> As a last resort, what about a PL/PGSQL procedure loop on the query\n> result? Since the insert is very few rows relative to the work the select\n> has to do, I could just turn the insert.. select.. into a for loop. Then\n> the select could be parallel?\n\nMaybe, but you're still skating on a cliff edge. I think it's pure chance\nthat the parallelized query is working acceptably well; next month with\nslightly different conditions, it might not.\n\n regards, tom lane\n-- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Eric H. Raskin 914-765-0500 x120 or 315-338-4461 (direct)Professional Advertising Systems Inc. fax: 914-765-0500 or 315-338-4461 (direct)3 Morgan Drive #310 [email protected] Kisco, NY 10549 http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 14:25:00 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "... btw, it occurs to me that at least as a stopgap,\n\"set enable_nestloop = off\" would be worth trying.\nThe killer problem with rowcount-1 estimates is that they\nencourage the planner to use nestloops when it shouldn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 14:35:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 12:12 PM Eric Raskin <[email protected]> wrote:\n\n> OK - I see. And to add insult to injury, I tried creating a temporary\n> table to store the intermediate results. Then I was going to just do an\n> insert... select... to insert the rows. That would de-couple the\n> nextval() from the query.\n>\n> Strangely, the first query I tried it on worked great. But, when I tried\n> to add a second set of data with a similar query to the same temporary\n> table, it slowed right down again. And, of course, when I remove the\n> insert, it's fine.\n>\n\nI am not entirely sure I am understanding your process properly, but just a\nnote- If you are getting acceptable results creating the temp table, and\nthe issue is just that you get very bad plans when using it in some query\nthat follows, then it is worth noting that autovacuum does nothing on temp\ntables and for me it is nearly always worth the small cost to perform an\nanalyze (at least on key fields) after creating a temp table, or rather\nafter inserting/updating/deleting records in a significant way.\n\nOn Wed, Nov 4, 2020 at 12:12 PM Eric Raskin <[email protected]> wrote:OK - I see. And to add insult to injury, I tried creating a temporary table to store the intermediate results. Then I was going to just do an insert... select... to insert the rows. That would de-couple the nextval() from the query.Strangely, the first query I tried it on worked great. But, when I tried to add a second set of data with a similar query to the same temporary table, it slowed right down again. And, of course, when I remove the insert, it's fine. 
I am not entirely sure I am understanding your process properly, but just a note- If you are getting acceptable results creating the temp table, and the issue is just that you get very bad plans when using it in some query that follows, then it is worth noting that autovacuum does nothing on temp tables and for me it is nearly always worth the small cost to perform an analyze (at least on key fields) after creating a temp table, or rather after inserting/updating/deleting records in a significant way.",
"msg_date": "Wed, 4 Nov 2020 13:19:53 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "So, things get even weirder. When I execute each individual select\nstatement I am generating from a psql prompt, they all finish very\nquickly.\n\nIf I execute them inside a pl/pgsql block, the second one hangs.\n\nIs there something about execution inside a pl/pgsql block that is\ndifferent from the psql command line?\n\n\nOn Wed, Nov 4, 2020 at 3:20 PM Michael Lewis <[email protected]> wrote:\n\n> On Wed, Nov 4, 2020 at 12:12 PM Eric Raskin <[email protected]> wrote:\n>\n>> OK - I see. And to add insult to injury, I tried creating a temporary\n>> table to store the intermediate results. Then I was going to just do an\n>> insert... select... to insert the rows. That would de-couple the\n>> nextval() from the query.\n>>\n>> Strangely, the first query I tried it on worked great. But, when I tried\n>> to add a second set of data with a similar query to the same temporary\n>> table, it slowed right down again. And, of course, when I remove the\n>> insert, it's fine.\n>>\n>\n> I am not entirely sure I am understanding your process properly, but just\n> a note- If you are getting acceptable results creating the temp table, and\n> the issue is just that you get very bad plans when using it in some query\n> that follows, then it is worth noting that autovacuum does nothing on temp\n> tables and for me it is nearly always worth the small cost to perform an\n> analyze (at least on key fields) after creating a temp table, or rather\n> after inserting/updating/deleting records in a significant way.\n>\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. 
Raskin\n                                914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n              fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n                                     [email protected]\n\nMt Kisco, NY 10549\n                                   http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 15:39:41 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "Eric Raskin <[email protected]> writes:\n> So, things get even weirder. When I execute each individual select\n> statement I am generating from a psql prompt, they all finish very\n> quickly.\n> If I execute them inside a pl/pgsql block, the second one hangs.\n> Is there something about execution inside a pl/pgsql block that is\n> different from the psql command line?\n\nGeneric vs specific plan, perhaps? Are you passing any parameter\nvalues in from plpgql variables?\n\nIIRC, you could force the matter by using EXECUTE, though it's\nsomewhat more notationally tedious. In late-model PG versions,\nplan_cache_mode could help you too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 16:16:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
},
{
"msg_contents": "set enable_nestloop=off did the trick.  Execution time when down to seconds\nper query.\n\nThanks very much for your help.\n\nOn Wed, Nov 4, 2020 at 4:16 PM Tom Lane <[email protected]> wrote:\n\n> Eric Raskin <[email protected]> writes:\n> > So, things get even weirder.  When I execute each individual select\n> > statement I am generating from a psql prompt, they all finish very\n> > quickly.\n> > If I execute them inside a pl/pgsql block, the second one hangs.\n> > Is there something about execution inside a pl/pgsql block that is\n> > different from the psql command line?\n>\n> Generic vs specific plan, perhaps?  Are you passing any parameter\n> values in from plpgql variables?\n>\n> IIRC, you could force the matter by using EXECUTE, though it's\n> somewhat more notationally tedious.  In late-model PG versions,\n> plan_cache_mode could help you too.\n>\n>                         regards, tom lane\n>\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n                                914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n              fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n                                     [email protected]\n\nMt Kisco, NY 10549\n                                   http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 23:10:00 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding nextval() to a select caused hang/very slow execution"
}
] |
[
{
    "msg_contents": "SORRY!  Here's a link that should show the plan:\n\nhttps://explain.depesz.com/s/SCdY\n\n\n-- \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nEric H. Raskin\n                                914-765-0500 x120 or *315-338-4461\n(direct)*\n\nProfessional Advertising Systems Inc.\n              fax: 914-765-0500 or *315-338-4461 (direct)*\n\n3 Morgan Drive #310\n                                     [email protected]\n\nMt Kisco, NY 10549\n                                   http://www.paslists.com",
"msg_date": "Wed, 4 Nov 2020 12:19:51 -0500",
"msg_from": "Eric Raskin <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Adding nextval() to a select caused hang/very slow execution"
}
] |
[
{
    "msg_contents": "Hi Guys, Sorry for the bold below. I just feel it helps others identify my question easily.\n\nA description of what you are trying to achieve and what results you expect.:\n\nA very low cost query running long (clustered and vacuumed and analyzed on join columns), a very generic query..What can be done without admin privs??\n\nPostgreSQL version number you are running: 11.5\n\nHow you installed PostgreSQL: admin installed it\n\nChanges made to the settings in the postgresql.conf file: see Server Configuration for a quick way to list them all. no changes to postgresql.conf\n\nOperating system and version:\n\nWhat program you're using to connect to PostgreSQL: pgadmin 4v4\n \nIs there anything relevant or unusual in the PostgreSQL server logs?: no\n \nFor questions about any kind of error: no errors\n\nWhat you were doing when the error happened / how to cause the error: no error\n\nThe EXACT TEXT of the error message you're getting, if there is one: (Copy and paste the message to the email, do not send a screenshot) no error.\n\n-- \nThanks & Regards\nKoteswara Daliparthi",
"msg_date": "Wed, 4 Nov 2020 21:58:50 -0500",
"msg_from": "Koteswara Rao Daliparthi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Low cost query - running longer"
},
{
    "msg_contents": "On Wednesday, November 4, 2020, Koteswara Rao Daliparthi <[email protected]>\nwrote:\n\n> Hi Guys, Sorry for the bold below. I just feel it helps others identify\n> my question easily.\n>\n\nYou stopped following the reporting form too soon.  You also need to\nprovide the “questions about queries” section.  Or “slow queries”:\n\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nDavid J.",
"msg_date": "Wed, 4 Nov 2020 20:02:54 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Low cost query - running longer"
}
] |
[
{
"msg_contents": "Hello,\n\nI've got a postgres master node that receives a lot of writes, WAL \nwritten at 100MB/sec or more at times.\nAnd when these load spikes happen streaming replication starts lagging.\nIt looks like the lag happens on sending stage, and is limited by the \nmaster pg_wal partition throughput.\nIt's an SSD RAID-1 but it was the same when it was an HDD RAID-1.\nI tried to prioritize reads using deadline scheduler, but even extreme \nvalues don't change the situation:\niostat shows more bytes written than read, while the device is busy, \noften 90-100%.\n\nWriters connect via a transaction-pooling-mode pgbouncer, so I can tune \nthe number of parallel connections.\nReplication works fine when I limit it to 3, which is quite low.\nIt works for me so far, but looks really inflexible to me,\nas e.g. I'll have to allocate a separate pgbouncer server pool for less \neager apps and for humans.\n\nI increased wal_buffers to 256MB hoping that it'll reduce the disk load, \nbut it probably works only for\ninitial lag accumulation, once the lag is there it's not going to help.\n\nionice'ing walsender to best-effort -2 didn't help either\n\nAny ideas how to prioritize walsender reads over writes from wal writer \nand backends even if there are multiple quite active ones?\n\nThe problem happens only occasionally, so if you ask for more details it \nmay take some time to reply.\nSorry about this.\n\nBest, Alex\n\n\n",
"msg_date": "Fri, 13 Nov 2020 10:12:17 +0000",
"msg_from": "Alexey Bashtanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to prioritise walsender reading from pg_wal over WAL writes?"
}
] |
[
{
"msg_contents": "Hi list,\n\nWe have an application that generates SQL statements that are then executed\non a postgresql database. The statements are always \"bulk\" type statements:\nthey always return a relatively large amount of data, and have only a few\nnot very selective filter expressions. They do contain a terrible amount of\njoins, though.\nThe database has a \"datavault\" structure consisting of satellite, hub and\nlink tables. Tables can easily contain a large amount of rows (10..100\nmillion). The individual tables have primary key constraints but no\nreferential constraints, and the only indexes present are those for the\nprimary key constraints. There are also no indices on any other column. The\nreason for this is that everything is done in this database to get the\nhighest performance possible for both loading data and querying it for our\nspecific purpose, and indices do not help with that at all (they are never\nused by the planner because the conditions are never selective enough).\n\nOne problem we have with these queries is that Postgresql's planner often\nbadly underestimates the number of rows returned by query steps. It then\nuses nested loops for merging parts because it estimated it needs to loop\nonly a few times, but in reality it needs to loop 10 million times, and\nthat tends to not finish in any reasonable time ;)\n\nConsidering the type of query we do we can safely say that using a nested\nloop is always a bad choice, and so we always run these statements after\nsetting enable_nestloop to false. This has greatly helped the stability of\nthese queries.\n\nBut lately while migrating to Postgres 13 (from 9.6) we found that Postgres\ndoes not (always) obey the enable_nestloop = false setting anymore: some\nqueries make a plan that contains a nested loop, and consequently they do\nnot finish anymore. 
Whether a nested loop is being generated still seems to\ndepend on the database's actual statistics; on some databases it uses the\nnested loop while on others (that use the exact same schema but have\ndifferent data in them) it uses only hash and merge joins- as it should.\n\nWhat can I do to prevent these nested loops from occurring?\n\nFYI: an example query in a datavault database:\nselect\n coalesce(adres_pe.id_s, -1) as adres_id\n, coalesce(tijd.tijdkey, 'Unknown') as calender_id\n, coalesce(di01905cluster_pe.id_s, -1) as di01905cluster_id\n, coalesce(di02697relatie_pe.id_s, -1) as di02697relatie_id\n, coalesce(di04238cluster_pe.id_s, -1) as di04238cluster_id\n, coalesce(di04306natuurlijkpersoon_pe.id_s, -1) as\ndi04306natuurlijkpersoon_id\n, coalesce(eenheid_pe.id_s, -1) as eenheid_id\n, coalesce(huurovereenkomst_pe.id_s, -1) as huurovereenkomst_id\n, cast(count(huurovereenkomst_pe.identificatie) as bigint) as kg00770\nfrom datavault.tijd tijd\ncross join lateral (select * from datavault.s_h_huurovereenkomst_ssm where\ndv_start_dts <= tijd.einddatum and dv_end_dts > tijd.einddatum)\nhuurovereenkomst_pe\ninner join datavault.l_huurovk_ovk_ssm l_huurovk_ovk_ssm_pe\n on huurovereenkomst_pe.id_h_huurovereenkomst =\nl_huurovk_ovk_ssm_pe.id_h_huurovereenkomst\n and l_huurovk_ovk_ssm_pe.dv_start_dts <= tijd.einddatum\n and l_huurovk_ovk_ssm_pe.dv_end_dts > tijd.einddatum\ninner join datavault.s_h_overeenkomst_ssm overeenkomst_pe\n on l_huurovk_ovk_ssm_pe.id_h_overeenkomst =\novereenkomst_pe.id_h_overeenkomst\n and overeenkomst_pe.dv_start_dts <= tijd.einddatum\n and overeenkomst_pe.dv_end_dts > tijd.einddatum\nleft join datavault.l_huurovk_eenheid_ssm l_huurovk_eenheid_ssm_pe\n on huurovereenkomst_pe.id_h_huurovereenkomst =\nl_huurovk_eenheid_ssm_pe.id_h_huurovereenkomst\n and l_huurovk_eenheid_ssm_pe.dv_start_dts <= tijd.einddatum\n and l_huurovk_eenheid_ssm_pe.dv_end_dts > tijd.einddatum\nleft join datavault.s_h_eenheid_ssm eenheid_pe\n on 
l_huurovk_eenheid_ssm_pe.id_h_eenheid = eenheid_pe.id_h_eenheid\n and eenheid_pe.dv_start_dts <= tijd.einddatum\n and eenheid_pe.dv_end_dts > tijd.einddatum\nleft join datavault.l_adres_eenheid_ssm l_adres_eenheid_ssm_pe\n on l_huurovk_eenheid_ssm_pe.id_h_eenheid =\nl_adres_eenheid_ssm_pe.id_h_eenheid\n and l_adres_eenheid_ssm_pe.dv_start_dts <= tijd.einddatum\n and l_adres_eenheid_ssm_pe.dv_end_dts > tijd.einddatum\nleft join datavault.s_h_adres_ssm adres_pe\n on l_adres_eenheid_ssm_pe.id_h_adres = adres_pe.id_h_adres\n and adres_pe.dv_start_dts <= tijd.einddatum\n and adres_pe.dv_end_dts > tijd.einddatum\nleft join lateral (select\n l_cluster_eenheid_ssm.id_h_eenheid\n , di01905cluster.id_s\n from datavault.l_cluster_eenheid_ssm\n inner join datavault.s_h_cluster_ssm di01905cluster\n on l_cluster_eenheid_ssm.id_h_cluster = di01905cluster.id_h_cluster\n and di01905cluster.dv_start_dts <= tijd.einddatum\n and di01905cluster.dv_end_dts > tijd.einddatum\n where di01905cluster.soort = 'FIN'\n) di01905cluster_pe\n on l_huurovk_eenheid_ssm_pe.id_h_eenheid =\ndi01905cluster_pe.id_h_eenheid\nleft join lateral (select\n l_ovk_ovkrel_ssm.id_h_overeenkomst\n , di02697relatie.id_s\n from datavault.l_ovk_ovkrel_ssm\n inner join datavault.l_ovkrel_rel_ssm l_ovkrel_rel_ssm\n on l_ovk_ovkrel_ssm.id_h_overeenkomstrelatie =\nl_ovkrel_rel_ssm.id_h_overeenkomstrelatie\n inner join datavault.l_huurovk_ovk_ssm l_huurovk_ovk_ssm\n on l_ovk_ovkrel_ssm.id_h_overeenkomst =\nl_huurovk_ovk_ssm.id_h_overeenkomst\n inner join s_h_huurovereenkomst_ssm huurovereenkomst_pe\n on l_huurovk_ovk_ssm.id_h_huurovereenkomst =\nhuurovereenkomst_pe.id_h_huurovereenkomst\n and huurovereenkomst_pe.dv_start_dts <= tijd.einddatum\n and huurovereenkomst_pe.dv_end_dts > tijd.einddatum\n inner join datavault.s_h_relatie_ssm di02697relatie\n on l_ovkrel_rel_ssm.id_h_relatie = di02697relatie.id_h_relatie\n and di02697relatie.dv_start_dts <= tijd.einddatum\n and di02697relatie.dv_end_dts > tijd.einddatum\n left 
join datavault.mv_ve0269801 ve02698\n on ve02698.calender_id = coalesce(tijd.tijdkey, 'Unknown')\n and di02697relatie.identificatie = VE02698.VE02698\n and ve02698.huurovereenkomst_id = huurovereenkomst_pe.id_s\n where VE02698.VE02698 is not null\n) di02697relatie_pe\n on l_huurovk_ovk_ssm_pe.id_h_overeenkomst =\ndi02697relatie_pe.id_h_overeenkomst\nleft join lateral (select\n l_cluster_eenheid_ssm.id_h_eenheid\n , di04238cluster.id_s\n from datavault.l_cluster_eenheid_ssm\n inner join datavault.s_h_cluster_ssm di04238cluster\n on l_cluster_eenheid_ssm.id_h_cluster = di04238cluster.id_h_cluster\n and di04238cluster.dv_start_dts <= tijd.einddatum\n and di04238cluster.dv_end_dts > tijd.einddatum\n where di04238cluster.soort = 'OND'\n) di04238cluster_pe\n on l_huurovk_eenheid_ssm_pe.id_h_eenheid =\ndi04238cluster_pe.id_h_eenheid\nleft join lateral (select\n l_ovk_ovkrel_ssm.id_h_overeenkomst\n , di04306natuurlijkpersoon.id_s\n from datavault.l_ovk_ovkrel_ssm\n inner join datavault.l_ovkrel_rel_ssm l_ovkrel_rel_ssm\n on l_ovk_ovkrel_ssm.id_h_overeenkomstrelatie =\nl_ovkrel_rel_ssm.id_h_overeenkomstrelatie\n inner join datavault.l_huurovk_ovk_ssm l_huurovk_ovk_ssm\n on l_ovk_ovkrel_ssm.id_h_overeenkomst =\nl_huurovk_ovk_ssm.id_h_overeenkomst\n inner join s_h_huurovereenkomst_ssm huurovereenkomst_pe\n on l_huurovk_ovk_ssm.id_h_huurovereenkomst =\nhuurovereenkomst_pe.id_h_huurovereenkomst\n and huurovereenkomst_pe.dv_start_dts <= tijd.einddatum\n and huurovereenkomst_pe.dv_end_dts > tijd.einddatum\n inner join datavault.l_natuurlijkpersoon_rel_ssm\nl_natuurlijkpersoon_rel_ssm\n on l_ovkrel_rel_ssm.id_h_relatie =\nl_natuurlijkpersoon_rel_ssm.id_h_relatie\n inner join datavault.s_h_natuurlijkpersoon_ssm di04306natuurlijkpersoon\n on l_natuurlijkpersoon_rel_ssm.id_h_natuurlijkpersoon =\ndi04306natuurlijkpersoon.id_h_natuurlijkpersoon\n and di04306natuurlijkpersoon.dv_start_dts <= tijd.einddatum\n and di04306natuurlijkpersoon.dv_end_dts > tijd.einddatum\n left join 
datavault.mv_ve0269801 ve02698\n on ve02698.calender_id = coalesce(tijd.tijdkey, 'Unknown')\n and di04306natuurlijkpersoon.identificatie = VE02698.VE02698\n and ve02698.huurovereenkomst_id = huurovereenkomst_pe.id_s\n where VE02698.VE02698 is not null\n) di04306natuurlijkpersoon_pe\n on l_huurovk_ovk_ssm_pe.id_h_overeenkomst =\ndi04306natuurlijkpersoon_pe.id_h_overeenkomst\nwhere huurovereenkomst_pe.soort = 'HUU'\nand overeenkomst_pe.begindatum <= tijd.einddatum\nand (overeenkomst_pe.einddatum >= tijd.einddatum or\novereenkomst_pe.einddatum is null)\ngroup by coalesce(adres_pe.id_s, -1)\n , coalesce(tijd.tijdkey, 'Unknown')\n , coalesce(di01905cluster_pe.id_s, -1)\n , coalesce(di02697relatie_pe.id_s, -1)\n , coalesce(di04238cluster_pe.id_s, -1)\n , coalesce(di04306natuurlijkpersoon_pe.id_s, -1)\n , coalesce(eenheid_pe.id_s, -1)\n , coalesce(huurovereenkomst_pe.id_s, -1)\n\nThe execution plan on Postgres 13.1:\nGroupAggregate (cost=20008853763.07..20008853776.02 rows=370 width=68)\n Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)),\n(COALESCE(tijd.tijdkey, 'Unknown'::character varying)),\n(COALESCE(di01905cluster.id_s, '-1'::integer)),\n(COALESCE(di02697relatie_pe.id_s, '-1'::integer)),\n(COALESCE(di04238cluster.id_s, '-1'::integer)),\n(COALESCE(di04306natuurlijkpersoon.id_s, '-1'::integer)),\n(COALESCE(eenheid_pe.id_s, '-1'::integer)),\n(COALESCE(s_h_huurovereenkomst_ssm.id_s, '-1'::integer))\n -> Sort (cost=20008853763.07..20008853764.00 rows=370 width=81)\n Sort Key: (COALESCE(adres_pe.id_s, '-1'::integer)),\n(COALESCE(tijd.tijdkey, 'Unknown'::character varying)),\n(COALESCE(di01905cluster.id_s, '-1'::integer)),\n(COALESCE(di02697relatie_pe.id_s, '-1'::integer)),\n(COALESCE(di04238cluster.id_s, '-1'::integer)),\n(COALESCE(di04306natuurlijkpersoon.id_s, '-1'::integer)),\n(COALESCE(eenheid_pe.id_s, '-1'::integer)),\n(COALESCE(s_h_huurovereenkomst_ssm.id_s, '-1'::integer))\n -> Nested Loop Left Join (cost=20000106618.94..20008853747.29\nrows=370 width=81)\n Join 
Filter: (l_huurovk_ovk_ssm_pe.id_h_overeenkomst =\nl_ovk_ovkrel_ssm_1.id_h_overeenkomst)\n -> Merge Left Join (cost=10000096634.62..10000097034.06\nrows=370 width=60)\n Merge Cond: (l_huurovk_eenheid_ssm_pe.id_h_eenheid =\nl_cluster_eenheid_ssm_1.id_h_eenheid)\n Join Filter: ((di04238cluster.dv_start_dts <=\ntijd.einddatum) AND (di04238cluster.dv_end_dts > tijd.einddatum))\n -> Merge Left Join\n (cost=10000091816.99..10000092215.26 rows=370 width=60)\n Merge Cond:\n(l_huurovk_eenheid_ssm_pe.id_h_eenheid = eenheid_pe.id_h_eenheid)\n Join Filter: ((eenheid_pe.dv_start_dts <=\ntijd.einddatum) AND (eenheid_pe.dv_end_dts > tijd.einddatum))\n -> Merge Left Join\n (cost=10000087694.35..10000087954.18 rows=370 width=56)\n Merge Cond:\n(l_huurovk_eenheid_ssm_pe.id_h_eenheid = l_cluster_eenheid_ssm.id_h_eenheid)\n Join Filter: ((di01905cluster.dv_start_dts\n<= tijd.einddatum) AND (di01905cluster.dv_end_dts > tijd.einddatum))\n -> Sort\n (cost=10000078369.96..10000078370.88 rows=370 width=52)\n Sort Key:\nl_huurovk_eenheid_ssm_pe.id_h_eenheid\n -> Merge Join\n (cost=10000077852.39..10000078354.17 rows=370 width=52)\n Merge Cond:\n(l_huurovk_ovk_ssm_pe.id_h_overeenkomst = overeenkomst_pe.id_h_overeenkomst)\n Join Filter:\n((overeenkomst_pe.dv_start_dts <= tijd.einddatum) AND\n(overeenkomst_pe.dv_end_dts > tijd.einddatum) AND\n(overeenkomst_pe.begindatum <= tijd.einddatum) AND\n((overeenkomst_pe.einddatum >= tijd.einddatum) OR\n(overeenkomst_pe.einddatum IS NULL)))\n -> Sort\n (cost=10000073751.26..10000073783.05 rows=12715 width=52)\n Sort Key:\nl_huurovk_ovk_ssm_pe.id_h_overeenkomst\n -> Hash Right Join\n (cost=10000068896.35..10000072884.46 rows=12715 width=52)\n Hash Cond:\n(adres_pe.id_h_adres = l_adres_eenheid_ssm_pe.id_h_adres)\n Join Filter:\n((adres_pe.dv_start_dts <= tijd.einddatum) AND (adres_pe.dv_end_dts >\ntijd.einddatum))\n -> Seq Scan on\ns_h_adres_ssm adres_pe (cost=0.00..3424.19 rows=99519 width=24)\n -> Hash\n (cost=10000068737.41..10000068737.41 rows=12715 
width=52)\n -> Merge\nLeft Join (cost=10000068351.17..10000068737.41 rows=12715 width=52)\n Merge\nCond: (l_huurovk_eenheid_ssm_pe.id_h_eenheid =\nl_adres_eenheid_ssm_pe.id_h_eenheid)\n Join\nFilter: ((l_adres_eenheid_ssm_pe.dv_start_dts <= tijd.einddatum) AND\n(l_adres_eenheid_ssm_pe.dv_end_dts > tijd.einddatum))\n ->\n Sort (cost=10000065526.53..10000065558.32 rows=12715 width=48)\n\nSort Key: l_huurovk_eenheid_ssm_pe.id_h_eenheid\n\n-> Hash Right Join (cost=10000063619.26..10000064659.74 rows=12715\nwidth=48)\n\n Hash Cond: (l_huurovk_eenheid_ssm_pe.id_h_huurovereenkomst =\ns_h_huurovereenkomst_ssm.id_h_huurovereenkomst)\n\n Join Filter: ((l_huurovk_eenheid_ssm_pe.dv_start_dts <= tijd.einddatum)\nAND (l_huurovk_eenheid_ssm_pe.dv_end_dts > tijd.einddatum))\n\n -> Seq Scan on l_huurovk_eenheid_ssm l_huurovk_eenheid_ssm_pe\n (cost=0.00..711.82 rows=36782 width=24)\n\n -> Hash (cost=10000063460.32..10000063460.32 rows=12715 width=48)\n\n -> Merge Join (cost=10000060987.75..10000063460.32 rows=12715\nwidth=48)\n\n Merge Cond: (s_h_huurovereenkomst_ssm.id_h_huurovereenkomst\n= l_huurovk_ovk_ssm_pe.id_h_huurovereenkomst)\n\n Join Filter: ((s_h_huurovereenkomst_ssm.dv_start_dts <=\ntijd.einddatum) AND (s_h_huurovereenkomst_ssm.dv_end_dts > tijd.einddatum))\n\n -> Sort (cost=4368.09..4460.04 rows=36782 width=45)\n\n Sort Key:\ns_h_huurovereenkomst_ssm.id_h_huurovereenkomst\n\n -> Seq Scan on s_h_huurovereenkomst_ssm\n (cost=0.00..1578.78 rows=36782 width=45)\n\n Filter: (soort = 'HUU'::text)\n\n -> Sort (cost=10000056619.67..10000056905.75 rows=114433\nwidth=23)\n\n Sort Key: l_huurovk_ovk_ssm_pe.id_h_huurovereenkomst\n\n -> Nested Loop (cost=10000022065.79..10000047004.92\nrows=114433 width=23)\n\n -> Seq Scan on tijd (cost=0.00..1.28 rows=28\nwidth=11)\n\n -> Hash Left Join (cost=22065.79..22915.56\nrows=4087 width=28)\n\n Hash Cond:\n(l_huurovk_ovk_ssm_pe.id_h_overeenkomst =\ndi02697relatie_pe.id_h_overeenkomst)\n\n Filter:\n((l_huurovk_ovk_ssm_pe.dv_start_dts <= 
tijd.einddatum) AND\n(l_huurovk_ovk_ssm_pe.dv_end_dts > tijd.einddatum))\n\n -> Seq Scan on l_huurovk_ovk_ssm\nl_huurovk_ovk_ssm_pe (cost=0.00..711.82 rows=36782 width=24)\n\n -> Hash (cost=22065.78..22065.78 rows=1\nwidth=8)\n\n -> Subquery Scan on\ndi02697relatie_pe (cost=8384.39..22065.78 rows=1 width=8)\n\n -> Hash Join\n (cost=8384.39..22065.77 rows=1 width=8)\n\n Hash Cond:\n((l_ovk_ovkrel_ssm.id_h_overeenkomst = l_huurovk_ovk_ssm.id_h_overeenkomst)\nAND (ve02698.huurovereenkomst_id = huurovereenkomst_pe.id_s))\n\n -> Hash Join\n (cost=5710.64..19380.86 rows=1487 width=12)\n\n Hash Cond:\n(ve02698.ve02698 = di02697relatie.identificatie)\n\n -> Seq Scan on\nmv_ve0269801 ve02698 (cost=0.00..13559.11 rows=25663 width=15)\n\n Filter:\n((ve02698 IS NOT NULL) AND ((calender_id)::text = (COALESCE(tijd.tijdkey,\n'Unknown'::character varying))::text))\n\n -> Hash\n (cost=5645.49..5645.49 rows=5212 width=18)\n\n -> Hash\nJoin (cost=4509.43..5645.49 rows=5212 width=18)\n\n Hash\nCond: (l_ovk_ovkrel_ssm.id_h_overeenkomstrelatie =\nl_ovkrel_rel_ssm.id_h_overeenkomstrelatie)\n\n ->\n Seq Scan on l_ovk_ovkrel_ssm (cost=0.00..908.05 rows=46905 width=8)\n\n ->\n Hash (cost=4444.28..4444.28 rows=5212 width=18)\n\n\n-> Hash Join (cost=3308.22..4444.28 rows=5212 width=18)\n\n\n Hash Cond: (l_ovkrel_rel_ssm.id_h_relatie =\ndi02697relatie.id_h_relatie)\n\n\n -> Seq Scan on l_ovkrel_rel_ssm (cost=0.00..908.05 rows=46905\nwidth=8)\n\n\n -> Hash (cost=3183.28..3183.28 rows=9995 width=18)\n\n\n -> Seq Scan on s_h_relatie_ssm di02697relatie\n (cost=0.00..3183.28 rows=9995 width=18)\n\n\n Filter: ((dv_start_dts <= tijd.einddatum) AND (dv_end_dts\n> tijd.einddatum))\n\n -> Hash\n (cost=2612.44..2612.44 rows=4087 width=8)\n\n -> Hash Join\n (cost=1721.82..2612.44 rows=4087 width=8)\n\n Hash Cond:\n(l_huurovk_ovk_ssm.id_h_huurovereenkomst =\nhuurovereenkomst_pe.id_h_huurovereenkomst)\n\n -> Seq\nScan on l_huurovk_ovk_ssm (cost=0.00..711.82 rows=36782 width=8)\n\n -> Hash\n 
(cost=1670.73..1670.73 rows=4087 width=8)\n\n ->\n Seq Scan on s_h_huurovereenkomst_ssm huurovereenkomst_pe\n (cost=0.00..1670.73 rows=4087 width=8)\n\n\nFilter: ((dv_start_dts <= tijd.einddatum) AND (dv_end_dts > tijd.einddatum))\n ->\n Sort (cost=2824.63..2899.95 rows=30128 width=24)\n\nSort Key: l_adres_eenheid_ssm_pe.id_h_eenheid\n\n-> Seq Scan on l_adres_eenheid_ssm l_adres_eenheid_ssm_pe\n (cost=0.00..583.28 rows=30128 width=24)\n -> Sort\n (cost=4101.13..4193.09 rows=36782 width=28)\n Sort Key:\novereenkomst_pe.id_h_overeenkomst\n -> Seq Scan on\ns_h_overeenkomst_ssm overeenkomst_pe (cost=0.00..1311.82 rows=36782\nwidth=28)\n -> Sort (cost=9324.37..9451.80 rows=50973\nwidth=24)\n Sort Key:\nl_cluster_eenheid_ssm.id_h_eenheid\n -> Hash Join (cost=168.45..5338.93\nrows=50973 width=24)\n Hash Cond:\n(l_cluster_eenheid_ssm.id_h_cluster = di01905cluster.id_h_cluster)\n -> Seq Scan on\nl_cluster_eenheid_ssm (cost=0.00..3904.00 rows=201800 width=8)\n -> Hash (cost=155.05..155.05\nrows=1072 width=24)\n -> Seq Scan on\ns_h_cluster_ssm di01905cluster (cost=0.00..155.05 rows=1072 width=24)\n Filter: (soort =\n'FIN'::text)\n -> Sort (cost=4122.63..4197.95 rows=30128\nwidth=24)\n Sort Key: eenheid_pe.id_h_eenheid\n -> Seq Scan on s_h_eenheid_ssm eenheid_pe\n (cost=0.00..1881.28 rows=30128 width=24)\n -> Sort (cost=4817.63..4817.75 rows=48 width=24)\n Sort Key: l_cluster_eenheid_ssm_1.id_h_eenheid\n -> Hash Join (cost=155.06..4816.29 rows=48\nwidth=24)\n Hash Cond:\n(l_cluster_eenheid_ssm_1.id_h_cluster = di04238cluster.id_h_cluster)\n -> Seq Scan on l_cluster_eenheid_ssm\nl_cluster_eenheid_ssm_1 (cost=0.00..3904.00 rows=201800 width=8)\n -> Hash (cost=155.05..155.05 rows=1\nwidth=24)\n -> Seq Scan on s_h_cluster_ssm\ndi04238cluster (cost=0.00..155.05 rows=1 width=24)\n Filter: (soort = 'OND'::text)\n -> Hash Join (cost=9984.32..23666.77 rows=1 width=8)\n Hash Cond: ((l_ovk_ovkrel_ssm_1.id_h_overeenkomst =\nl_huurovk_ovk_ssm_1.id_h_overeenkomst) AND 
(ve02698_1.huurovereenkomst_id =\nhuurovereenkomst_pe_1.id_s))\n -> Hash Join (cost=7310.58..20981.40 rows=1548\nwidth=12)\n Hash Cond: (ve02698_1.ve02698 =\ndi04306natuurlijkpersoon.identificatie)\n -> Seq Scan on mv_ve0269801 ve02698_1\n (cost=0.00..13559.11 rows=25663 width=15)\n Filter: ((ve02698 IS NOT NULL) AND\n((calender_id)::text = (COALESCE(tijd.tijdkey, 'Unknown'::character\nvarying))::text))\n -> Hash (cost=7245.44..7245.44 rows=5211\nwidth=18)\n -> Hash Join (cost=6109.38..7245.44\nrows=5211 width=18)\n Hash Cond:\n(l_ovk_ovkrel_ssm_1.id_h_overeenkomstrelatie =\nl_ovkrel_rel_ssm_1.id_h_overeenkomstrelatie)\n -> Seq Scan on l_ovk_ovkrel_ssm\nl_ovk_ovkrel_ssm_1 (cost=0.00..908.05 rows=46905 width=8)\n -> Hash (cost=6044.25..6044.25\nrows=5211 width=18)\n -> Hash Join\n (cost=4908.18..6044.25 rows=5211 width=18)\n Hash Cond:\n(l_ovkrel_rel_ssm_1.id_h_relatie = l_natuurlijkpersoon_rel_ssm.id_h_relatie)\n -> Seq Scan on\nl_ovkrel_rel_ssm l_ovkrel_rel_ssm_1 (cost=0.00..908.05 rows=46905 width=8)\n -> Hash\n (cost=4797.20..4797.20 rows=8879 width=18)\n -> Hash Join\n (cost=2862.60..4797.20 rows=8879 width=18)\n Hash Cond:\n(l_natuurlijkpersoon_rel_ssm.id_h_natuurlijkpersoon =\ndi04306natuurlijkpersoon.id_h_natuurlijkpersoon)\n -> Seq Scan\non l_natuurlijkpersoon_rel_ssm (cost=0.00..1546.13 rows=79913 width=8)\n -> Hash\n (cost=2742.64..2742.64 rows=9597 width=18)\n -> Seq\nScan on s_h_natuurlijkpersoon_ssm di04306natuurlijkpersoon\n (cost=0.00..2742.64 rows=9597 width=18)\n\nFilter: ((dv_start_dts <= tijd.einddatum) AND (dv_end_dts > tijd.einddatum))\n -> Hash (cost=2612.44..2612.44 rows=4087 width=8)\n -> Hash Join (cost=1721.82..2612.44 rows=4087\nwidth=8)\n Hash Cond:\n(l_huurovk_ovk_ssm_1.id_h_huurovereenkomst =\nhuurovereenkomst_pe_1.id_h_huurovereenkomst)\n -> Seq Scan on l_huurovk_ovk_ssm\nl_huurovk_ovk_ssm_1 (cost=0.00..711.82 rows=36782 width=8)\n -> Hash (cost=1670.73..1670.73 rows=4087\nwidth=8)\n -> Seq Scan on\ns_h_huurovereenkomst_ssm 
huurovereenkomst_pe_1 (cost=0.00..1670.73\nrows=4087 width=8)\n Filter: ((dv_start_dts <=\ntijd.einddatum) AND (dv_end_dts > tijd.einddatum))\nJIT:\n Functions: 249\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n\nExecution plan in graphical mode: https://controlc.com/95d76625 (save file\nas html, then open in a browser).\n\nThanks a lot for your time and help.\n\nRegards,\n\nFrits",
"msg_date": "Tue, 17 Nov 2020 14:47:55 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres using nested loops despite setting enable_nestloop to false"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 02:47:55PM +0100, Frits Jalvingh wrote:\n> But lately while migrating to Postgres 13 (from 9.6) we found that Postgres\n> does not (always) obey the enable_nestloop = false setting anymore: some\n> \n> The execution plan on Postgres 13.1:\n\nCould you send the plans under pg13 and pg9.6 as attachments ?\n\nWhat is the setting of work_mem ?\n\nI see the cost is dominated by 2*disable_cost, but I wonder whether the I/O\ncost of hash joins now exceeds that. Maybe hash_mem_multiplier helps you?\n\nGroupAggregate (cost=20008853763.07..20008853776.02 rows=370 width=68) \n Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)), \n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Nov 2020 08:20:24 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to\n false"
},
{
"msg_contents": "Hi Justin, thanks for your help!\nI have attached both plans, both made with set enable_nestloop = false in\nthe attachments.\nOn the Postgresql 13 server work_mem is 64MB. It cannot really be higher\nthere because Postgresql does not control its use of memory, setting it\nhigher on this VM will cause the OOM killer to kill Postgresql for some\nqueries.\nOn the Postgres 9.6 server we have it way higher, at 5GB (this machine is a\nmonster with about 800GB of RAM).\n\nI indeed saw too that the artificial cost for the nested join led to 2x\nthat amount. But that seems to be because there are actually 2 nested joins\nin there: we use a cross join with a \"time\" table (which contains just some\n28 rows) and that one always seems to need a nested loop (it is present\nalways). So I'm not too certain that that 2x disable_cost is from joins; it\nseems to be from 2x the nested loop. And I actually wondered whether that\nwould be a cause of the issue, because as far as costs are concerned that\nsecond nested loops only _increases_ the cost by 2 times...\n\nRegards,\n\nFrits\n\n\nOn Tue, Nov 17, 2020 at 3:20 PM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Nov 17, 2020 at 02:47:55PM +0100, Frits Jalvingh wrote:\n> > But lately while migrating to Postgres 13 (from 9.6) we found that\n> Postgres\n> > does not (always) obey the enable_nestloop = false setting anymore: some\n> >\n> > The execution plan on Postgres 13.1:\n>\n> Could you send the plans under pg13 and pg9.6 as attachments ?\n>\n> What is the setting of work_mem ?\n>\n> I see the cost is dominated by 2*disable_cost, but I wonder whether the I/O\n> cost of hash joins now exceeds that. Maybe hash_mem_multiplier helps you?\n>\n> GroupAggregate (cost=20008853763.07..20008853776.02 rows=370 width=68)\n>\n>\n> Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)),\n>\n>\n>\n>\n> --\n> Justin\n>\n>\n>",
"msg_date": "Tue, 17 Nov 2020 16:58:45 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to\n false"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 04:58:45PM +0100, Frits Jalvingh wrote:\n> Hi Justin, thanks for your help!\n> I have attached both plans, both made with set enable_nestloop = false in\n> the attachments.\n> On the Postgresql 13 server work_mem is 64MB. It cannot really be higher\n> there because Postgresql does not control its use of memory, setting it\n> higher on this VM will cause the OOM killer to kill Postgresql for some\n> queries.\n\nCan you try to get an explain just for this query with either increased\nwork_mem or hash_mem_multiplier ?\n\nOr possibly by messing with the cost parameters, including seq_page_cost.\nMaking all cost_* params 1000x smaller might allow the disable cost to be\neffective.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Nov 2020 10:06:22 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to\n false"
},
{
"msg_contents": "Ah, sorry, I forgot. I set \"hash_mem_multiplier = 2\", and after that to 20.\nIt had no effect on the nested loops.\n\nOn Tue, Nov 17, 2020 at 4:58 PM Frits Jalvingh <[email protected]> wrote:\n\n> Hi Justin, thanks for your help!\n> I have attached both plans, both made with set enable_nestloop = false in\n> the attachments.\n> On the Postgresql 13 server work_mem is 64MB. It cannot really be higher\n> there because Postgresql does not control its use of memory, setting it\n> higher on this VM will cause the OOM killer to kill Postgresql for some\n> queries.\n> On the Postgres 9.6 server we have it way higher, at 5GB (this machine is\n> a monster with about 800GB of RAM).\n>\n> I indeed saw too that the artificial cost for the nested join led to 2x\n> that amount. But that seems to be because there are actually 2 nested joins\n> in there: we use a cross join with a \"time\" table (which contains just some\n> 28 rows) and that one always seems to need a nested loop (it is present\n> always). So I'm not too certain that that 2x disable_cost is from joins; it\n> seems to be from 2x the nested loop. And I actually wondered whether that\n> would be a cause of the issue, because as far as costs are concerned that\n> second nested loops only _increases_ the cost by 2 times...\n>\n> Regards,\n>\n> Frits\n>\n>\n> On Tue, Nov 17, 2020 at 3:20 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Tue, Nov 17, 2020 at 02:47:55PM +0100, Frits Jalvingh wrote:\n>> > But lately while migrating to Postgres 13 (from 9.6) we found that\n>> Postgres\n>> > does not (always) obey the enable_nestloop = false setting anymore: some\n>> >\n>> > The execution plan on Postgres 13.1:\n>>\n>> Could you send the plans under pg13 and pg9.6 as attachments ?\n>>\n>> What is the setting of work_mem ?\n>>\n>> I see the cost is dominated by 2*disable_cost, but I wonder whether the\n>> I/O\n>> cost of hash joins now exceeds that. 
Maybe hash_mem_multiplier helps you?\n>>\n>> GroupAggregate (cost=20008853763.07..20008853776.02 rows=370 width=68)\n>>\n>>\n>>\n>> Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)),\n>>\n>>\n>>\n>>\n>> --\n>> Justin\n>>\n>>\n>>",
"msg_date": "Tue, 17 Nov 2020 17:08:39 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to\n false"
},
{
"msg_contents": "Ok, I set all those cost parameters:\n# - Planner Cost Constants -\n\nseq_page_cost = 0.0001 # measured on an arbitrary scale\nrandom_page_cost = 0.0002\ncpu_tuple_cost = 0.00001 # same scale as above\ncpu_index_tuple_cost = 0.000005 # same scale as above\ncpu_operator_cost = 0.0000025 # same scale as above\nparallel_tuple_cost = 0.0001 # same scale as above\nparallel_setup_cost = 1.0 # same scale as above\n#min_parallel_table_scan_size = 8MB\n#min_parallel_index_scan_size = 512kB\neffective_cache_size = 2GB\n\nIt still has the nested loop on top, but the total cost is now:\nGroupAggregate (cost=20000005652.88..20000005652.90 rows=370 width=68)\n\n\nOn Tue, Nov 17, 2020 at 5:08 PM Frits Jalvingh <[email protected]> wrote:\n\n> Ah, sorry, I forgot. I set \"hash_mem_multiplier = 2\", and after that to\n> 20. It did had no effects on the nested loops.\n>\n> On Tue, Nov 17, 2020 at 4:58 PM Frits Jalvingh <[email protected]> wrote:\n>\n>> Hi Justin, thanks for your help!\n>> I have attached both plans, both made with set enable_nestloop = false in\n>> the attachments.\n>> On the Postgresql 13 server work_mem is 64MB. It cannot really be higher\n>> there because Postgresql does not control its use of memory, setting it\n>> higher on this VM will cause the OOM killer to kill Postgresql for some\n>> queries.\n>> On the Postgres 9.6 server we have it way higher, at 5GB (this machine is\n>> a monster with about 800GB of RAM).\n>>\n>> I indeed saw too that the artificial cost for the nested join led to 2x\n>> that amount. But that seems to be because there are actually 2 nested joins\n>> in there: we use a cross join with a \"time\" table (which contains just some\n>> 28 rows) and that one always seems to need a nested loop (it is present\n>> always). So I'm not too certain that that 2x disable_cost is from joins; it\n>> seems to be from 2x the nested loop. 
And I actually wondered whether that\n>> would be a cause of the issue, because as far as costs are concerned that\n>> second nested loops only _increases_ the cost by 2 times...\n>>\n>> Regards,\n>>\n>> Frits\n>>\n>>\n>> On Tue, Nov 17, 2020 at 3:20 PM Justin Pryzby <[email protected]>\n>> wrote:\n>>\n>>> On Tue, Nov 17, 2020 at 02:47:55PM +0100, Frits Jalvingh wrote:\n>>> > But lately while migrating to Postgres 13 (from 9.6) we found that\n>>> Postgres\n>>> > does not (always) obey the enable_nestloop = false setting anymore:\n>>> some\n>>> >\n>>> > The execution plan on Postgres 13.1:\n>>>\n>>> Could you send the plans under pg13 and pg9.6 as attachments ?\n>>>\n>>> What is the setting of work_mem ?\n>>>\n>>> I see the cost is dominated by 2*disable_cost, but I wonder whether the\n>>> I/O\n>>> cost of hash joins now exceeds that. Maybe hash_mem_multiplier helps\n>>> you?\n>>>\n>>> GroupAggregate (cost=20008853763.07..20008853776.02 rows=370 width=68)\n>>>\n>>>\n>>>\n>>> Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)),\n>>>\n>>>\n>>>\n>>>\n>>> --\n>>> Justin\n>>>\n>>>\n>>>",
"msg_date": "Tue, 17 Nov 2020 17:14:44 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to false"
},
{
"msg_contents": "Frits Jalvingh <[email protected]> writes:\n> I have attached both plans, both made with set enable_nestloop = false in\n> the attachments.\n\nThe reason why you're getting a nested loop is that the planner has no\nother choice. The \"tijd\" table has no join conditions that would be\namenable to hash- or merge-joining it to something else, because both\nof those join methods require a plain equality join condition. AFAICS\nin a quick look, all of tijd's join conditions look more like\n\n Join Filter: ((di04238cluster.dv_start_dts <= tijd.einddatum) AND (di04238cluster.dv_end_dts > tijd.einddatum))\n\nwhich is not amenable to anything except brute force cross-join-and-\ntest-the-condition.\n\nGiven that, it's likely that \"enable_nestloop = false\" is making things\nworse not better, by artificially distorting the plan shape.\n\nSeeing the large number of joins involved, I wonder what your\ngeqo_threshold, join_collapse_limit, and from_collapse_limit settings\nare, and whether you can get a better plan by increasing them.\n\nThe planner doesn't seem to think that any of these joins involve\na very large number of rows, so I doubt that your work_mem setting\nis very relevant. However, are these rowcount estimates accurate?\nYou claimed upthread that you were dealing with hundreds of millions\nof rows, but it's impossible to credit that cost estimates like\n\n -> Seq Scan on s_h_cluster_ssm di01905cluster (cost=0.00..155.05 rows=1072 width=24)\n Filter: (soort = 'FIN'::text)\n\ncorrespond to scanning large tables.\n\nIn the end, I fear that finding a way to get rid of those\ninequality join conditions may be your only real answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Nov 2020 11:21:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to false"
},
{
"msg_contents": "Hello Tom, thanks for your help!\n\nI understand that the \"time\" table cross join needs a nested loop. Indeed\nthat nested loop is present in all plans generated.\nBut it is the _second_ (topmost) nested loop that is the issue. Once the\ntime table has been joined it should be possible to do something else for\nthat second nested loop. This is proven by that query on 9.6 (which has\nonly one nested loop for that exact same query, on almost the same database\ncontent as the Postgresql 13 one). Even on Postgresql 13 a correct plan is\nmade in another database (exact same structure, different data); I have\nattached the plan that is made there too.\nAll databases that make a plan without the second nested loops also finish\nthe query within a reasonable time period (16 seconds on the .9.6 server).\nOn the 13 server with the nested loops plan the process times out after 2\nhours.\n\nAs far as the row counts go: yes, this database is not by far the biggest\none, so the row counts are less. It also depends on what query we actually\nrun (we can have hundreds of them on different tables, and not all tables\nare that big).\n\nI disabled nested_loops not just for fun, I disabled it because without it\nmany of the queries effectively hang because their plan estimate expects\nonly a few rows while in reality there are millions. Disabling nested loops\nwill let lots of the generated queries fail, even on smaller datasets.\n\nI have no idea of how to get rid of those inequality queries, except by not\nusing SQL and doing them by hand in code.. That would prove to be\ndisastrous for performance as I'd have to read all those datasets\ncompletely... 
Do you have an idea on how to do that better?\n\nRegards,\nFrits\n\n\nOn Tue, Nov 17, 2020 at 5:21 PM Tom Lane <[email protected]> wrote:\n\n> Frits Jalvingh <[email protected]> writes:\n> > I have attached both plans, both made with set enable_nestloop = false in\n> > the attachments.\n>\n> The reason why you're getting a nested loop is that the planner has no\n> other choice. The \"tijd\" table has no join conditions that would be\n> amenable to hash- or merge-joining it to something else, because both\n> of those join methods require a plain equality join condition. AFAICS\n> in a quick look, all of tijd's join conditions look more like\n>\n> Join Filter: ((di04238cluster.dv_start_dts <= tijd.einddatum) AND\n> (di04238cluster.dv_end_dts > tijd.einddatum))\n>\n> which is not amenable to anything except brute force cross-join-and-\n> test-the-condition.\n>\n> Given that, it's likely that \"enable_nestloop = false\" is making things\n> worse not better, by artificially distorting the plan shape.\n>\n> Seeing the large number of joins involved, I wonder what your\n> geqo_threshold, join_collapse_limit, and from_collapse_limit settings\n> are, and whether you can get a better plan by increasing them.\n>\n> The planner doesn't seem to think that any of these joins involve\n> a very large number of rows, so I doubt that your work_mem setting\n> is very relevant. However, are these rowcount estimates accurate?\n> You claimed upthread that you were dealing with hundreds of millions\n> of rows, but it's impossible to credit that cost estimates like\n>\n> -> Seq Scan on s_h_cluster_ssm di01905cluster (cost=0.00..155.05\n> rows=1072 width=24)\n> Filter: (soort = 'FIN'::text)\n>\n> correspond to scanning large tables.\n>\n> In the end, I fear that finding a way to get rid of those\n> inequality join conditions may be your only real answer.\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 17 Nov 2020 17:42:30 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to false"
},
{
"msg_contents": "I found out that setting:\nset join_collapse_limit = 14;\nset from_collapse_limit = 14;\nIn addition to disabling the nested loops does produce a viable plan, with\nonly the nested loop to generate the tijd table cross join as a basic part\ndown low... The original values for those were 12. It does seem scary to\nupdate those as the possibility of having 14! plans to choose from seems...\nscary...\n\nIt does feel a bit like throwing dice...\n\nI assume the bad plan is being made by the gequ planner. Is there a way to\ndiscourage it from using those nested loops?\n\nRegards,\n\nFrits\n\n\nOn Tue, Nov 17, 2020 at 5:42 PM Frits Jalvingh <[email protected]> wrote:\n\n> Hello Tom, thanks for your help!\n>\n> I understand that the \"time\" table cross join needs a nested loop. Indeed\n> that nested loop is present in all plans generated.\n> But it is the _second_ (topmost) nested loop that is the issue. Once the\n> time table has been joined it should be possible to do something else for\n> that second nested loop. This is proven by that query on 9.6 (which has\n> only one nested loop for that exact same query, on almost the same database\n> content as the Postgresql 13 one). Even on Postgresql 13 a correct plan is\n> made in another database (exact same structure, different data); I have\n> attached the plan that is made there too.\n> All databases that make a plan without the second nested loops also finish\n> the query within a reasonable time period (16 seconds on the .9.6 server).\n> On the 13 server with the nested loops plan the process times out after 2\n> hours.\n>\n> As far as the row counts go: yes, this database is not by far the biggest\n> one, so the row counts are less. 
It also depends on what query we actually\n> run (we can have hundreds of them on different tables, and not all tables\n> are that big).\n>\n> I disabled nested_loops not just for fun, I disabled it because without it\n> many of the queries effectively hang because their plan estimate expects\n> only a few rows while in reality there are millions. Disabling nested loops\n> will let lots of the generated queries fail, even on smaller datasets.\n>\n> I have no idea of how to get rid of those inequality queries, except by\n> not using SQL and doing them by hand in code.. That would prove to be\n> disastrous for performance as I'd have to read all those datasets\n> completely... Do you have an idea on how to do that better?\n>\n> Regards,\n> Frits\n>\n>\n> On Tue, Nov 17, 2020 at 5:21 PM Tom Lane <[email protected]> wrote:\n>\n>> Frits Jalvingh <[email protected]> writes:\n>> > I have attached both plans, both made with set enable_nestloop = false\n>> in\n>> > the attachments.\n>>\n>> The reason why you're getting a nested loop is that the planner has no\n>> other choice. The \"tijd\" table has no join conditions that would be\n>> amenable to hash- or merge-joining it to something else, because both\n>> of those join methods require a plain equality join condition. 
AFAICS\n>> in a quick look, all of tijd's join conditions look more like\n>>\n>> Join Filter: ((di04238cluster.dv_start_dts <= tijd.einddatum) AND\n>> (di04238cluster.dv_end_dts > tijd.einddatum))\n>>\n>> which is not amenable to anything except brute force cross-join-and-\n>> test-the-condition.\n>>\n>> Given that, it's likely that \"enable_nestloop = false\" is making things\n>> worse not better, by artificially distorting the plan shape.\n>>\n>> Seeing the large number of joins involved, I wonder what your\n>> geqo_threshold, join_collapse_limit, and from_collapse_limit settings\n>> are, and whether you can get a better plan by increasing them.\n>>\n>> The planner doesn't seem to think that any of these joins involve\n>> a very large number of rows, so I doubt that your work_mem setting\n>> is very relevant. However, are these rowcount estimates accurate?\n>> You claimed upthread that you were dealing with hundreds of millions\n>> of rows, but it's impossible to credit that cost estimates like\n>>\n>> -> Seq Scan on s_h_cluster_ssm di01905cluster (cost=0.00..155.05\n>> rows=1072 width=24)\n>> Filter: (soort = 'FIN'::text)\n>>\n>> correspond to scanning large tables.\n>>\n>> In the end, I fear that finding a way to get rid of those\n>> inequality join conditions may be your only real answer.\n>>\n>> regards, tom lane\n>>\n>",
"msg_date": "Thu, 19 Nov 2020 12:52:42 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres using nested loops despite setting enable_nestloop to false"
}
]
[
{
"msg_contents": "Hello to all,\nCan you help me understand if it is feasible to install clustered Postgres 12 on 2 balanced servers with VMware?\nWho has had this experience and can you share more information with me?\n\nBest Regards,\nNancy",
"msg_date": "Tue, 17 Nov 2020 13:54:03 +0000",
"msg_from": "Nunzia Vairo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Install clustered postgres"
}
]
[
{
"msg_contents": "If it helps, here's the details of the hardware config.\nThe controller is |AVAGO MegaRAID SAS 9361-4i|,\nthe SSDs are |INTEL SSDSC2KG960G8| (configured as a raid1).\nCurrent scheduler used is deadline.\nCurrently XFS is mounted without nobarriers, but I'm going to set that \nwhen there's high load next time to see how it affects throughput.\n\nProperties of the array\n\nVD6 Properties :\n==============\nStrip Size = 256 KB\nNumber of Blocks = 1874329600\nVD has Emulated PD = Yes\nSpan Depth = 1\nNumber of Drives Per Span = 2\nWrite Cache(initial setting) = WriteBack\nDisk Cache Policy = Disk's Default\nEncryption = None\nData Protection = Disabled\nActive Operations = None\nExposed to OS = Yes\nCreation Date = 28-10-2020\nCreation Time = 05:17:16 PM\nEmulation type = default\nCachebypass size = Cachebypass-64k\nCachebypass Mode = Cachebypass Intelligent\nIs LD Ready for OS Requests = Yes\nSCSI NAA Id = 600605b00af03650272c641ca30c3196\n\nBest, Alex",
"msg_date": "Wed, 18 Nov 2020 11:54:33 +0000",
"msg_from": "Alexey Bashtanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to prioritise walsender reading from pg_wal over WAL writes?"
},
{
"msg_contents": "On Wed, 2020-11-18 at 11:54 +0000, Alexey Bashtanov wrote:\n> If it helps, here's the details of the hardware config.\n> The controller is AVAGO MegaRAID SAS 9361-4i,\n> the SSDs are INTEL SSDSC2KG960G8 (configured as a raid1).\n> Current scheduler used is deadline.\n> Currently XFS is mounted without nobarriers, but I'm going to set that when there's high load next time to see how it affects throughput.\n> \n> Properties of the array\n> VD6 Properties :\n> ==============\n> Strip Size = 256 KB\n> Number of Blocks = 1874329600\n> VD has Emulated PD = Yes\n> Span Depth = 1\n> Number of Drives Per Span = 2\n> Write Cache(initial setting) = WriteBack\n> Disk Cache Policy = Disk's Default\n> Encryption = None\n> Data Protection = Disabled\n> Active Operations = None\n> Exposed to OS = Yes\n> Creation Date = 28-10-2020\n> Creation Time = 05:17:16 PM\n> Emulation type = default\n> Cachebypass size = Cachebypass-64k\n> Cachebypass Mode = Cachebypass Intelligent\n> Is LD Ready for OS Requests = Yes\n> SCSI NAA Id = 600605b00af03650272c641ca30c3196\n\nWhy??\n\nWAL buffers has the most recent information, so that would result in unnecessary delay and I/O.\n\nYou'd have to hack the code, but I wonder what leads you to this interesting requirement.\n\nYours,\nLaurenz Albe\n-- \n+43-670-6056265\nCYBERTEC PostgreSQL International GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 13:11:23 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to prioritise walsender reading from pg_wal over WAL writes?"
},
{
"msg_contents": "Sorry Laurenz,\n\nMy reply did not get threaded appropriately.\nMy original question was here:\nhttps://www.postgresql.org/message-id/a74d5732-60fd-d18b-05fd-7b2b97099f19%40imap.cc\nI'd like to prioritize walsender for replication not to lag too much.\nOtherwise, when I have load spikes on master, standby lags, sometimes by \nhundreds of gigabytes.\n\nBest, Alex\n\n\n",
"msg_date": "Wed, 18 Nov 2020 12:15:11 +0000",
"msg_from": "Alexey Bashtanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to prioritise walsender reading from pg_wal over WAL writes?"
},
{
"msg_contents": "On Wed, 2020-11-18 at 12:15 +0000, Alexey Bashtanov wrote:\n> My reply did not get threaded appropriately.\n> My original question was here:\n> https://www.postgresql.org/message-id/a74d5732-60fd-d18b-05fd-7b2b97099f19%40imap.cc\n> I'd like to prioritize walsender for replication not to lag too much.\n> Otherwise, when I have load spikes on master, standby lags, sometimes by \n> hundreds of gigabytes.\n\nI would first determine where the bottleneck is.\n\nIs it really the walsender, or is it on the network or in the standby server's replay?\n\nCheck the difference between \"sent_lsn\", \"replay_lsn\" from \"pg_stat_replication\" and\npg_current_wal_lsn() on the primary.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 17:20:58 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to prioritise walsender reading from pg_wal over WAL writes?"
},
{
"msg_contents": "\n> I would first determine where the bottleneck is.\n>\n> Is it really the walsender, or is it on the network or in the standby server's replay?\nIt is really the walsender, and it really is the performance of the WAL \nstorage on the master.\n> Check the difference between \"sent_lsn\", \"replay_lsn\" from \"pg_stat_replication\" and\n> pg_current_wal_lsn() on the primary.\nYes I've checked these numbers, the lagging one is sent_lsn.\nIt doesn't look like it's hitting network capacity either.\nWhen we moved it to an NVMe as a short-term solution it worked fine.\n\nBest, Alex\n\n\n",
"msg_date": "Thu, 19 Nov 2020 13:38:46 +0000",
"msg_from": "Alexey Bashtanov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to prioritise walsender reading from pg_wal over WAL writes?"
}
]
[
{
"msg_contents": "Hi,\n\nI noticed something strange in our PG server. I have a table named\n'timetable' that has only one bigint column and one row.\n\nOnce in every 5 seconds this row is updated to the current time epoch\nvalue in milliseconds.\n\nThe update query seems to be taking considerable time (avg 50\nmilliseconds). When I tried generating the explain (analyze,buffers)\nfor the query, the planning time + execution time is always less than\n0.1 millisecond. However the query time as shown when /timing of psql\nis enabled shows approx 30 milliseconds (I am connecting via psql from\nthe localhost).\n\n\nPlease find the details below.\n\npostgres=> select version();\n version\n----------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.4.15 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\n(1 row)\n\nTime: 0.572 ms\n\n\n\n\n\npostgres=> \\d+ timetable\n Table \"public.timetable\"\n Column | Type | Modifiers | Storage | Stats target | Description\n--------+--------+-----------+---------+--------------+-------------\n time | bigint | | plain | |\n\n\n\n\n\n\npostgres=> table timetable ;\n time\n------------\n 1605988584\n(1 row)\n\nTime: 0.402 ms\n\n\n\n\n\npostgres=> explain (analyze,buffers,verbose) update timetable set time = time+0;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Update on public.timetable (cost=0.00..4.01 rows=1 width=14) (actual\ntime=0.064..0.064 rows=0 loops=1)\n Buffers: shared hit=5\n -> Seq Scan on public.timetable (cost=0.00..4.01 rows=1 width=14)\n(actual time=0.029..0.029 rows=1 loops=1)\n Output: (\"time\" + 0), ctid\n Buffers: shared hit=4\n Planning time: 0.054 ms\n Execution time: 0.093 ms\n(7 rows)\n\nTime: 27.685 ms\n\n\nSometimes this shoots up to even a few hundred milliseconds.\n\npostgres=> explain (analyze,buffers,verbose) update 
timetable set time = time+0;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Update on public.timetable (cost=0.00..4.01 rows=1 width=14) (actual\ntime=0.048..0.048 rows=0 loops=1)\n Buffers: shared hit=5\n -> Seq Scan on public.timetable (cost=0.00..4.01 rows=1 width=14)\n(actual time=0.027..0.028 rows=1 loops=1)\n Output: (\"time\" + 0), ctid\n Buffers: shared hit=4\n Planning time: 0.063 ms\n Execution time: 0.084 ms\n(7 rows)\n\nTime: 291.090 ms\n\n\n\n\nI guess the problem here may somehow be linked to frequent updates to\nthe one row. However I want to understand what exactly is going wrong\nhere. Also I don't understand the discrepancy between planning +\nexecution time from explain analyze and the time taken by the query as\nreported in pg log and in psql console.\n\nKindly help me on this.\n\nRegards,\nNanda\n\n\n",
"msg_date": "Sun, 22 Nov 2020 01:57:57 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple update query is slow"
},
{
"msg_contents": "Hi,\n\nJust realised that the time difference between explain analyze plan\nand /timing result is due to the implicit commit.\n\nSorry about that.\n\nRegards,\nNanda\n\nOn Sun, 22 Nov 2020 at 01:57, Nandakumar M <[email protected]> wrote:\n>\n> Hi,\n>\n> I noticed something strange in our PG server. I have a table named\n> 'timetable' that has only one bigint column and one row.\n>\n> Once in every 5 seconds this row is updated to the current time epoch\n> value in milliseconds.\n>\n> The update query seems to be taking considerable time (avg 50\n> milliseconds). When I tried generating the explain (analyze,buffers)\n> for the query, the planning time + execution time is always less than\n> 0.1 millisecond. However the query time as shown when /timing of psql\n> is enabled shows approx 30 milliseconds (I am connecting via psql from\n> the localhost).\n>\n>\n> Please find the details below.\n>\n> postgres=> select version();\n> version\n> ----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.4.15 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n> 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\n> (1 row)\n>\n> Time: 0.572 ms\n>\n>\n>\n>\n>\n> postgres=> \\d+ timetable\n> Table \"public.timetable\"\n> Column | Type | Modifiers | Storage | Stats target | Description\n> --------+--------+-----------+---------+--------------+-------------\n> time | bigint | | plain | |\n>\n>\n>\n>\n>\n>\n> postgres=> table timetable ;\n> time\n> ------------\n> 1605988584\n> (1 row)\n>\n> Time: 0.402 ms\n>\n>\n>\n>\n>\n> postgres=> explain (analyze,buffers,verbose) update timetable set time = time+0;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Update on public.timetable (cost=0.00..4.01 rows=1 width=14) (actual\n> time=0.064..0.064 rows=0 loops=1)\n> Buffers: shared hit=5\n> -> Seq Scan on public.timetable 
(cost=0.00..4.01 rows=1 width=14)\n> (actual time=0.029..0.029 rows=1 loops=1)\n> Output: (\"time\" + 0), ctid\n> Buffers: shared hit=4\n> Planning time: 0.054 ms\n> Execution time: 0.093 ms\n> (7 rows)\n>\n> Time: 27.685 ms\n>\n>\n> Sometimes this shoots up to even a few hundred milliseconds.\n>\n> postgres=> explain (analyze,buffers,verbose) update timetable set time = time+0;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Update on public.timetable (cost=0.00..4.01 rows=1 width=14) (actual\n> time=0.048..0.048 rows=0 loops=1)\n> Buffers: shared hit=5\n> -> Seq Scan on public.timetable (cost=0.00..4.01 rows=1 width=14)\n> (actual time=0.027..0.028 rows=1 loops=1)\n> Output: (\"time\" + 0), ctid\n> Buffers: shared hit=4\n> Planning time: 0.063 ms\n> Execution time: 0.084 ms\n> (7 rows)\n>\n> Time: 291.090 ms\n>\n>\n>\n>\n> I guess the problem here may somehow be linked to frequent updates to\n> the one row. However I want to understand what exactly is going wrong\n> here. Also I don't understand the discrepancy between planning +\n> execution time from explain analyze and the time taken by the query as\n> reported in pg log and in psql console.\n>\n> Kindly help me on this.\n>\n> Regards,\n> Nanda\n\n\n",
"msg_date": "Sun, 22 Nov 2020 02:18:10 +0530",
"msg_from": "Nandakumar M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple update query is slow"
},
{
"msg_contents": "On Sun, Nov 22, 2020 at 02:18:10AM +0530, Nandakumar M wrote:\n> Just realised that the time difference between explain analyze plan\n> and /timing result is due to the implicit commit.\n\nCan you run with SET client_min_messages=debug; and SET log_lock_waits=on;\nOh, but your server is too old for that...\n\nOn Sun, 22 Nov 2020 at 01:57, Nandakumar M <[email protected]> wrote:\n>\n> Hi,\n>\n> I noticed something strange in our PG server. I have a table named\n> 'timetable' that has only one bigint column and one row.\n>\n> Once in every 5 seconds this row is updated to the current time epoch\n> value in milliseconds.\n>\n> The update query seems to be taking considerable time (avg 50\n> milliseconds). When I tried generating the explain (analyze,buffers)\n> for the query, the planning time + execution time is always less than\n> 0.1 millisecond. However the query time as shown when /timing of psql\n> is enabled shows approx 30 milliseconds (I am connecting via psql from\n> the localhost).\n>\n>\n> Please find the details below.\n>\n> postgres=> select version();\n> version\n> ----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.4.15 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n> 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit\n> (1 row)\n>\n> Time: 0.572 ms\n>\n>\n>\n>\n>\n> postgres=> \\d+ timetable\n> Table \"public.timetable\"\n> Column | Type | Modifiers | Storage | Stats target | Description\n> --------+--------+-----------+---------+--------------+-------------\n> time | bigint | | plain | |\n>\n>\n>\n>\n>\n>\n> postgres=> table timetable ;\n> time\n> ------------\n> 1605988584\n> (1 row)\n>\n> Time: 0.402 ms\n>\n>\n>\n>\n>\n> postgres=> explain (analyze,buffers,verbose) update timetable set time = time+0;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Update on public.timetable 
(cost=0.00..4.01 rows=1 width=14) (actual\n> time=0.064..0.064 rows=0 loops=1)\n> Buffers: shared hit=5\n> -> Seq Scan on public.timetable (cost=0.00..4.01 rows=1 width=14)\n> (actual time=0.029..0.029 rows=1 loops=1)\n> Output: (\"time\" + 0), ctid\n> Buffers: shared hit=4\n> Planning time: 0.054 ms\n> Execution time: 0.093 ms\n> (7 rows)\n>\n> Time: 27.685 ms\n>\n>\n> Sometimes this shoots up to even a few hundred milliseconds.\n>\n> postgres=> explain (analyze,buffers,verbose) update timetable set time = time+0;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Update on public.timetable (cost=0.00..4.01 rows=1 width=14) (actual\n> time=0.048..0.048 rows=0 loops=1)\n> Buffers: shared hit=5\n> -> Seq Scan on public.timetable (cost=0.00..4.01 rows=1 width=14)\n> (actual time=0.027..0.028 rows=1 loops=1)\n> Output: (\"time\" + 0), ctid\n> Buffers: shared hit=4\n> Planning time: 0.063 ms\n> Execution time: 0.084 ms\n> (7 rows)\n>\n> Time: 291.090 ms\n>\n> I guess the problem here may somehow be linked to frequent updates to\n> the one row. However I want to understand what exactly is going wrong\n> here. Also I don't understand the discrepancy between planning +\n> execution time from explain analyze and the time taken by the query as\n> reported in pg log and in psql console.\n\n\n",
"msg_date": "Sun, 22 Nov 2020 11:30:57 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple update query is slow"
}
] |
[
{
"msg_contents": "Hi,\n\nThe feed_posts table has over 50 Million rows.\n\nWhen I m deleting all rows of a certain type that are over 60 days old.\n\nWhen I try to do a delete like this: it hangs for an entire day, so I\nneed to kill it with pg_terminate_backend(pid).\n\nDELETE FROM feed_posts\nWHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\nAND created_at > '2020-05-11 00:00:00'\nAND created_at < '2020-05-12 00:00:00';\n\nSo– I need help in figuring out how to do large deletes on a\nproduction database during normal hours.\n\nexplain plan is given below\n\n\n\n\"Delete on feed_posts (cost=1156.57..195748.88 rows=15534 width=6)\"\n\" -> Bitmap Heap Scan on feed_posts (cost=1156.57..195748.88\nrows=15534 width=6)\"\n\" Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp\nwithout time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp\nwithout time zone))\"\n\" Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)\"\n\" -> Bitmap Index Scan on feed_posts_created_at (cost=0.00..1152.68\nrows=54812 width=0)\"\n\" Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without\ntime zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without\ntime zone))\"\n\n\nplease help me on deleting the rows, Do I need to anything in postgres\nconfiguration ?\nor in table structure ?\n\n\n\n\n\nRegards,\nAtul\n\n\n",
"msg_date": "Thu, 3 Dec 2020 14:19:24 +0530",
"msg_from": "Atul Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "time taking deletion on large tables"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 08:15:06PM +0530, Atul Kumar wrote:\n> Hi,\n> \n> The feed_posts table has over 50 Million rows.\n> \n> When I m deleting all rows of a certain type that are over 60 days old.\n\nThe common solution to this problem is to partition, and then, instead\nof deleting rows - delete old partitions.\n\ndepesz\n\n\n",
"msg_date": "Thu, 3 Dec 2020 15:51:26 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "Hi Atul,\n\nPlease try the code below. Execute all the statements in one transaction.\n\nselect * into new_table from old_table where type = 'abcz';\ntruncate table old_table;\ninesrt into old_table select * from new_table;\n\n\n\n\nOn Thu, Dec 3, 2020 at 8:16 PM Atul Kumar <[email protected]> wrote:\n\n> Hi,\n>\n> The feed_posts table has over 50 Million rows.\n>\n> When I m deleting all rows of a certain type that are over 60 days old.\n>\n> When I try to do a delete like this: it hangs for an entire day, so I\n> need to kill it with pg_terminate_backend(pid).\n>\n> DELETE FROM feed_posts\n> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\n> AND created_at > '2020-05-11 00:00:00'\n> AND created_at < '2020-05-12 00:00:00';\n>\n> So– I need help in figuring out how to do large deletes on a\n> production database during normal hours.\n>\n> explain plan is given below\n>\n>\n>\n> \"Delete on feed_posts (cost=1156.57..195748.88 rows=15534 width=6)\"\n> \" -> Bitmap Heap Scan on feed_posts (cost=1156.57..195748.88\n> rows=15534 width=6)\"\n> \" Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp\n> without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp\n> without time zone))\"\n> \" Filter: (feed_definition_id =\n> 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)\"\n> \" -> Bitmap Index Scan on feed_posts_created_at (cost=0.00..1152.68\n> rows=54812 width=0)\"\n> \" Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without\n> time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without\n> time zone))\"\n>\n>\n> please help me on deleting the rows, Do I need to anything in postgres\n> configuration ?\n> or in table structure ?\n>\n>\n>\n>\n>\n> Regards,\n> Atul\n>\n>\n>\n\n-- \n*Regards,*\n*Ravikumar S,*\n*Ph: 8106741263*\n\nHi Atul,Please try the code below. 
Execute all the statements in one transaction.select * into new_table from old_table where type = 'abcz';truncate table \n\nold_table;inesrt into \n\nold_table select * from new_table;On Thu, Dec 3, 2020 at 8:16 PM Atul Kumar <[email protected]> wrote:Hi,\n\nThe feed_posts table has over 50 Million rows.\n\nWhen I m deleting all rows of a certain type that are over 60 days old.\n\nWhen I try to do a delete like this: it hangs for an entire day, so I\nneed to kill it with pg_terminate_backend(pid).\n\nDELETE FROM feed_posts\nWHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\nAND created_at > '2020-05-11 00:00:00'\nAND created_at < '2020-05-12 00:00:00';\n\nSo– I need help in figuring out how to do large deletes on a\nproduction database during normal hours.\n\nexplain plan is given below\n\n\n\n\"Delete on feed_posts (cost=1156.57..195748.88 rows=15534 width=6)\"\n\" -> Bitmap Heap Scan on feed_posts (cost=1156.57..195748.88\nrows=15534 width=6)\"\n\" Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp\nwithout time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp\nwithout time zone))\"\n\" Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)\"\n\" -> Bitmap Index Scan on feed_posts_created_at (cost=0.00..1152.68\nrows=54812 width=0)\"\n\" Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without\ntime zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without\ntime zone))\"\n\n\nplease help me on deleting the rows, Do I need to anything in postgres\nconfiguration ?\nor in table structure ?\n\n\n\n\n\nRegards,\nAtul\n\n\n-- Regards,Ravikumar S,Ph: 8106741263",
"msg_date": "Thu, 3 Dec 2020 20:43:57 +0530",
"msg_from": "Ravikumar Reddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 08:43:57PM +0530, Ravikumar Reddy wrote:\n> Please try the code below. Execute all the statements in one transaction.\n> \n> select * into new_table from old_table where type = 'abcz';\n> truncate table old_table;\n> inesrt into old_table select * from new_table;\n\nThis looks like advice for when most of the rows are being deleted, but I don't\nthink that's true here. It'd need to LOCK old_table, first, right? Also,\ntruncate isn't MVCC safe.\n\nAtul: What server version? Do you have an index on feed_definition_id ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIf explain (analyze,buffers) SELECT runs in a reasonable time for that query,\ninclude its output.\n\nOn Thu, Dec 3, 2020 at 8:16 PM Atul Kumar <[email protected]> wrote:\n> The feed_posts table has over 50 Million rows.\n>\n> When I m deleting all rows of a certain type that are over 60 days old.\n>\n> When I try to do a delete like this: it hangs for an entire day, so I\n> need to kill it with pg_terminate_backend(pid).\n>\n> DELETE FROM feed_posts\n> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\n> AND created_at > '2020-05-11 00:00:00'\n> AND created_at < '2020-05-12 00:00:00';\n>\n> So– I need help in figuring out how to do large deletes on a\n> production database during normal hours.\n>\n> please help me on deleting the rows, Do I need to anything in postgres\n> configuration ?\n> or in table structure ?\n\n\n",
"msg_date": "Thu, 3 Dec 2020 09:36:12 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "On 12/3/20 8:45 AM, Atul Kumar wrote:\n> Hi,\n>\n> The feed_posts table has over 50 Million rows.\n>\n> When I m deleting all rows of a certain type that are over 60 days old.\n>\n> When I try to do a delete like this: it hangs for an entire day, so I\n> need to kill it with pg_terminate_backend(pid).\n>\n> DELETE FROM feed_posts\n> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\n> AND created_at > '2020-05-11 00:00:00'\n> AND created_at < '2020-05-12 00:00:00';\n>\n> So– I need help in figuring out how to do large deletes on a\n> production database during normal hours.\n\nPresumably there is an index on created_at?\n\nWhat about feed_definition_id?\n\n> explain plan is given below\n>\n>\n>\n> \"Delete on feed_posts (cost=1156.57..195748.88 rows=15534 width=6)\"\n> \" -> Bitmap Heap Scan on feed_posts (cost=1156.57..195748.88\n> rows=15534 width=6)\"\n> \" Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp\n> without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp\n> without time zone))\"\n> \" Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)\"\n> \" -> Bitmap Index Scan on feed_posts_created_at (cost=0.00..1152.68\n> rows=54812 width=0)\"\n> \" Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without\n> time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without\n> time zone))\"\n\nHave you recently analyzed the table?\n\n-- \nAngular momentum makes the world go 'round.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 09:48:46 -0600",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Thu, Dec 03, 2020 at 08:43:57PM +0530, Ravikumar Reddy wrote:\n>> When I try to do a delete like this: it hangs for an entire day, so I\n>> need to kill it with pg_terminate_backend(pid).\n>> \n>> DELETE FROM feed_posts\n>> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\n>> AND created_at > '2020-05-11 00:00:00'\n>> AND created_at < '2020-05-12 00:00:00';\n\n90% of the \"delete takes forever\" complaints that we hear trace down to\nhaving a foreign key reference to the deletion-target table that's not\nbacked by an index on the referencing column. Then you end up getting\na seqscan on the referencing table to look for rows referencing a\nrow-to-be-deleted. And then another one for the next row. Etc.\n\nYou could try \"explain analyze\" on a query deleting just a single\none of these rows and see if an RI enforcement trigger is what's\neating the time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Dec 2020 11:16:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "\nOn 12/3/20 11:16 AM, Tom Lane wrote:\n> Justin Pryzby <[email protected]> writes:\n>> On Thu, Dec 03, 2020 at 08:43:57PM +0530, Ravikumar Reddy wrote:\n>>> When I try to do a delete like this: it hangs for an entire day, so I\n>>> need to kill it with pg_terminate_backend(pid).\n>>>\n>>> DELETE FROM feed_posts\n>>> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'\n>>> AND created_at > '2020-05-11 00:00:00'\n>>> AND created_at < '2020-05-12 00:00:00';\n> 90% of the \"delete takes forever\" complaints that we hear trace down to\n> having a foreign key reference to the deletion-target table that's not\n> backed by an index on the referencing column. Then you end up getting\n> a seqscan on the referencing table to look for rows referencing a\n> row-to-be-deleted. And then another one for the next row. Etc.\n>\n> You could try \"explain analyze\" on a query deleting just a single\n> one of these rows and see if an RI enforcement trigger is what's\n> eating the time.\n>\n> \t\t\t\n\n\n\nYeah. IIRC some other RDBMS systems actually create such an index if it\ndoesn't already exist. Maybe we should have a warning when setting up an\nFK constraint if the referencing fields aren't usefully indexed.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Dec 2020 12:00:44 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
},
{
"msg_contents": "> On Dec 3, 2020, at 9:45 AM, Atul Kumar <[email protected]> wrote:\n> \n> The feed_posts table has over 50 Million rows.\n> \n> When I m deleting all rows of a certain type that are over 60 days old.\n> \n> When I try to do a delete like this: it hangs for an entire day, so I\n> need to kill it with pg_terminate_backend(pid).\n\nDelete the records in batches. I have used this approach many times successfully for large tables that are highly active on live production systems. \n\nYou’ll have to find the correct batch size to use for your dataset while keeping the run time short; i.e. 30 seconds. Then repeatedly call the function using a script — I’ve used a perl script with the DBI module to accomplish it. \n\ni.e. \n\ncreate or replace function purge_feed_post (_purge_date date, _limit int default 5000)\n returns int \nas \n$$\n declare \n _rowcnt int;\n begin\n create temp table if not exists purge_feed_post_set (\n feed_post_id int\n )\n ;\n\n /* Identify records to be purged */\n insert into purge_feed_post_set (\n feed_post_id\n )\n select feed_post_id\n from feed_posts \n where created_at < _purge_date\n order by created_at\n limit _limit\n ;\n \n /* Remove old records */\n delete from feed_posts using purge_feed_post_set \n where feed_posts.feed_post_id = purge_feed_post_set.feed_post_id\n ;\n\n get diagnostics _rowcnt = ROW_COUNT;\n\n delete from purge_feed_post_set;\n\n return _rowcnt;\n end;\n$$ language plpgsql\n set search_path = public\n;\n\n\nOn Dec 3, 2020, at 9:45 AM, Atul Kumar <[email protected]> wrote:The feed_posts table has over 50 Million rows.When I m deleting all rows of a certain type that are over 60 days old.When I try to do a delete like this: it hangs for an entire day, so Ineed to kill it with pg_terminate_backend(pid).Delete the records in batches. I have used this approach many times successfully for large tables that are highly active on live production systems. 
You’ll have to find the correct batch size to use for your dataset while keeping the run time short; i.e. 30 seconds. Then repeatedly call the function using a script — I’ve used a perl script with the DBI module to accomplish it. i.e. create or replace function purge_feed_post (_purge_date date, _limit int default 5000) returns int as $$ declare _rowcnt int; begin create temp table if not exists purge_feed_post_set ( feed_post_id int ) ; /* Identify records to be purged */ insert into purge_feed_post_set ( feed_post_id ) select feed_post_id from feed_posts where created_at < _purge_date order by created_at limit _limit ; /* Remove old records */ delete from feed_posts using purge_feed_post_set where feed_posts.feed_post_id = purge_feed_post_set.feed_post_id ; get diagnostics _rowcnt = ROW_COUNT; delete from purge_feed_post_set; return _rowcnt; end;$$ language plpgsql set search_path = public;",
"msg_date": "Thu, 3 Dec 2020 13:45:33 -0500",
"msg_from": "Rui DeSousa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: time taking deletion on large tables"
}
] |
[
{
"msg_contents": "Hi,\nCan we disable not null constraints temporarily in the session-based transaction, like we disable FK constraints? \nSETsession_replication_role = ‘replica’; alter table table_name disable trigger user;”\n\nabove two options are working for unique constraints violation exception. \nThanks,Rj\nHi,Can we disable not null constraints temporarily in the session-based transaction, like we disable FK constraints? SET\nsession_replication_role = ‘replica’; alter table table_name disable trigger user;”above two options are working for unique constraints violation exception. Thanks,Rj",
"msg_date": "Thu, 3 Dec 2020 19:58:15 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temporarily disable not null constraints"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 1:00 PM Nagaraj Raj <[email protected]> wrote:\n\n> Hi,\n>\n> Can we disable not null constraints temporarily in the session-based\n> transaction, like we disable FK constraints?\n>\n> SET session_replication_role = ‘replica’;\n> alter table table_name disable trigger user;”\n>\n> above two options are working for unique constraints violation exception.\n>\n> Thanks,\n> Rj\n>\n\n\nYou can alter the column and remove the not null constraint, do your work,\nand then add it back, but it will have to verify all rows have that column\nset, that is, you can't leave some of them null.\n\nOn Thu, Dec 3, 2020 at 1:00 PM Nagaraj Raj <[email protected]> wrote:Hi,Can we disable not null constraints temporarily in the session-based transaction, like we disable FK constraints? SET\nsession_replication_role = ‘replica’; alter table table_name disable trigger user;”above two options are working for unique constraints violation exception. Thanks,RjYou can alter the column and remove the not null constraint, do your work, and then add it back, but it will have to verify all rows have that column set, that is, you can't leave some of them null.",
"msg_date": "Thu, 3 Dec 2020 13:09:51 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily disable not null constraints"
},
{
"msg_contents": "generally, you shouldn't be disabling your constraints, especially if you\nare having multiple parallel processes accessing your db.\ninstead, you should create them DEFERRABLE and have them checked at the end\nof your transaction.\n\nregarding your question about NOT NULL: it is not possible to have it\ndeferred (please check this page:\nhttps://www.postgresql.org/docs/13/sql-set-constraints.html)\nyou may alter your column, remove it, and then get it back, but still all\nrows will have to be checked, which I doubt you would like to see on a\nlarge table.\n\nregards, milos\n\n\n\nOn Thu, Dec 3, 2020 at 9:00 PM Nagaraj Raj <[email protected]> wrote:\n\n> Hi,\n>\n> Can we disable not null constraints temporarily in the session-based\n> transaction, like we disable FK constraints?\n>\n> SET session_replication_role = ‘replica’;\n> alter table table_name disable trigger user;”\n>\n> above two options are working for unique constraints violation exception.\n>\n> Thanks,\n> Rj\n>\n\ngenerally, you shouldn't be disabling your constraints, especially if you are having multiple parallel processes accessing your db.instead, you should create them DEFERRABLE and have them checked at the end of your transaction.regarding your question about NOT NULL: it is not possible to have it deferred (please check this page: https://www.postgresql.org/docs/13/sql-set-constraints.html)you may alter your column, remove it, and then get it back, but still all rows will have to be checked, which I doubt you would like to see on a large table.regards, milosOn Thu, Dec 3, 2020 at 9:00 PM Nagaraj Raj <[email protected]> wrote:Hi,Can we disable not null constraints temporarily in the session-based transaction, like we disable FK constraints? SET\nsession_replication_role = ‘replica’; alter table table_name disable trigger user;”above two options are working for unique constraints violation exception. Thanks,Rj",
"msg_date": "Thu, 3 Dec 2020 21:13:46 +0100",
"msg_from": "Milos Babic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily disable not null constraints"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 07:58:15PM +0000, Nagaraj Raj wrote:\n> Can we disable not null constraints temporarily in the session-based transaction, like we disable FK constraints?�\n\nIf you're trying to temporarily violate the not-null constraint..\nI don't know if it's a good idea..\n\n..but maybe this feature in v12 helps you:\n\nhttps://www.postgresql.org/docs/12/sql-altertable.html\n| Ordinarily this is checked during the ALTER TABLE by scanning the entire table; however, if a valid CHECK constraint is found which proves no NULL can exist, then the table scan is skipped.\n\nWhen you're done violating constraints, you can\nALTER .. ADD CONSTRAINT .. CHECK (.. IS NOT NULL) NOT VALID, and then\nALTER .. VALIDATE CONSTRAINT, and then ALTER column SET NOT NULL.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 3 Dec 2020 16:34:36 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporarily disable not null constraints"
}
] |
[
{
"msg_contents": "Hi Team,\n\nWe are using Postgresql JSONB as storage type in our development.\nIn the below table , RECORD column has JSONB data and we create a view which will derive the column \"TEST_MV_2\" from column \"RECORD\" as below\n\nCREATE OR REPLACE VIEW public.\"V_TEST_SELECT\"\nAS\n SELECT a.recid, a.record AS \"RECORD\",\n jsonb_path_query(a.xmlrecord, '$.\"2\"'::jsonpath) AS \"TEST_MV_2 \"\n FROM \" TEST_SELECT \" a;\n\nSo we might have array of data or an empty JSON object or an array of empty JSON object or a string in the column \"TEST_MV_2\".\nNull is stored as empty JSON object due to our business logic.\n\nRECID\nRECORD (datatype: JSONB)\nTEST_MV_2 (datatype: JSONB)\n\"SELTEST1\"\n\"{\"1\": \"SELTEST1\", \"2\": [{\"\": \"TESTVALUE\"}, {}]}\"\n[{\"\": \"TESTVALUE\"}, {}]\n\"SELTEST2\"\n\"{\"1\": \"SELTEST2\", \"2\": \"TESTVALUE\"}\"\n\"TESTVALUE\"\n\"SELTEST3\"\n\"{\"1\": \"SELTEST3\", \"2\": [{\"\": \"TESTVALUE\"}, {\"\": \"TESTVALUE1\"}]}\"\n[{\"\": \"TESTVALUE\"}, {\"\": \"TESTVALUE1\"}]\n\"SELTEST4\"\n\"{\"1\": \"SELTEST4\", \"2\": [{\"\": \"TESTVALUE4MV1\"}, {}]}\"\n[{\"\": \"TESTVALUE4MV1\"}, {}]\n\"SELTEST5\"\n\"{\"1\": \"SELTEST5\", \"2\": [{}, {}]}\"\n[{},{}]\n\"SELTEST6\"\n\"{\"1\": \"SELTEST6\", \"2\": {}}\"\n{}\n\"SELTEST7\"\n\"{\"1\": \"SELTEST7\", \"2\": [{}, {\"\": \"TESTVALUE\"}]}\"\n[{}, {\"\": \"TESTVALUE\"}]\n\n\nIn such cases, to find the null values in the JSONB, I have written below SQL Function to handle different type of data\n\nCREATE OR REPLACE FUNCTION jsonbNull(jsonb_column JSONB)\nreturns boolean as $$\ndeclare\n isPoint text := jsonb_typeof(jsonb_column) ;\nbegin\n CASE isPoint\n WHEN 'array' THEN\n if true = ALL(select (jsonb_array_elements(jsonb_column)) = '{}') THEN\n return true;\n else\n return false;\n end if;\n WHEN 'object' THEN\n if jsonb_column = '{}' THEN\n return true;\n else\n return false;\n end if;\n WHEN 'string' THEN\n return false;\n ELSE\n return true;\n END CASE;\nend;\n$$ LANGUAGE plpgsql 
IMMUTABLE;\n\nSample SQL statement used:\nSELECT RECID,\"TEST_MV_2\" FROM \"V_TEST_SELECT\" WHERE true=jsonbNull(\"TEST_MV_2\") ORDER BY RECID ;\n\n\nI would like to know whether we can handle multiple types of JSONB data in a better/nicer way as this function could impact performance of the query.\n\nKindly provide your suggestions.\n\nThanks,\n[cid:[email protected]]\nRISWANA\nTechnical Lead\n\nTEMENOS India\nSterling Road, Chennai\nd: + 91 9943613190\n\n[cid:[email protected]]<https://www.linkedin.com/company/temenos/>[cid:[email protected]]<https://twitter.com/Temenos>[cid:[email protected]]<https://www.facebook.com/TemenosGroup>[cid:[email protected]]<https://www.youtube.com/user/TemenosMarketing> temenos.com<http://www.temenos.com/?utm_source=signature&utm_medium=email&utm_campaign=new-signature&utm_content=email>\n\n\n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.",
"msg_date": "Fri, 4 Dec 2020 10:45:27 +0000",
"msg_from": "Riswana Rahman <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgeSQL JSONB Column with various type of data"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 9:21 AM Riswana Rahman <[email protected]> wrote:\n\n> CREATE OR REPLACE FUNCTION jsonbNull(jsonb_column JSONB)\n>\n> returns boolean as $$\n>\n> declare\n>\n> isPoint text := jsonb_typeof(jsonb_column) ;\n>\n> begin\n>\n> CASE isPoint\n>\n> WHEN 'array' THEN\n>\n> if true = ALL(select\n> (jsonb_array_elements(jsonb_column)) = '{}') THEN\n>\n> return true;\n>\n> else\n>\n> return false;\n>\n> end if;\n>\n> WHEN 'object' THEN\n>\n> if jsonb_column = '{}' THEN\n>\n> return true;\n>\n> else\n>\n> return false;\n>\n> end if;\n>\n> WHEN 'string' THEN\n>\n> return false;\n>\n> ELSE\n>\n> return true;\n>\n> END CASE;\n>\n> end;\n>\n> $$ LANGUAGE plpgsql IMMUTABLE;\n>\n\nAs far as I can tell, it seems like this could be re-written as a function\nin SQL instead of plpgsql which allows for it to be in-lined. Have you\ntested performance and found it to be an issue, or just optimizing in\nadvance of a need?\n\n>\n\nOn Fri, Dec 4, 2020 at 9:21 AM Riswana Rahman <[email protected]> wrote:\n\n\nCREATE OR REPLACE FUNCTION jsonbNull(jsonb_column JSONB)\nreturns boolean as $$\ndeclare\n isPoint text := jsonb_typeof(jsonb_column) ;\nbegin \r\n\n CASE isPoint\r\n\n WHEN 'array' THEN\n if true = ALL(select (jsonb_array_elements(jsonb_column)) = '{}') THEN\n return true;\n else\r\n\n return false;\n end if;\n WHEN 'object' THEN\n if jsonb_column = '{}' THEN\r\n\n return true;\n else\r\n\n return false;\n end if;\n WHEN 'string' THEN\n return false;\n ELSE\n return true;\n END CASE;\nend;\n$$ LANGUAGE plpgsql IMMUTABLE;As far as I can tell, it seems like this could be re-written as a function in SQL instead of plpgsql which allows for it to be in-lined. Have you tested performance and found it to be an issue, or just optimizing in advance of a need?",
"msg_date": "Fri, 4 Dec 2020 09:31:59 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgeSQL JSONB Column with various type of data"
}
] |
[
{
"msg_contents": "Hello!\n\nWe have a multi-tenant service where each customer has millions of users\n(total: ~150M rows). Now we would like to let each customer define some\ncustom columns for his users and then let the customer search his users\nefficiently based on these columns.\n\nThis problem seems really hard to solve with PostgreSQL:\nhttps://stackoverflow.com/questions/5106335/how-to-design-a-database-for-user-defined-fields\n\nIn particular the easiest way would be to add a JSON field on the users\ntable (e.g. user metadata). However the PostgreSQL GIN index only supports\nexact matches and not range queries. This means that a query on a range\n(e.g. age > 30) would be extremely inefficient and would result in a table\nscan.\n\nAlgorithmically it seems possible to use a GIN index (based on btree) for a\nrange query. Also MongoDB seems to support something similar (\nhttps://docs.mongodb.com/manual/core/index-wildcard/).\n\nAre there any plans to add support for range queries to GIN indexes (on\nJSON) in the future versions of PostgreSQL?\n\n\nMarco Colli\nPushpad",
"msg_date": "Fri, 4 Dec 2020 16:39:30 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index for range queries on JSON (user defined fields)"
},
{
"msg_contents": "On Fri, 4 Dec 2020 at 15:39, Marco Colli <[email protected]> wrote:\n\n> Hello!\n>\n> We have a multi-tenant service where each customer has millions of users\n> (total: ~150M rows). Now we would like to let each customer define some\n> custom columns for his users and then let the customer search his users\n> efficiently based on these columns.\n>\n> This problem seems really hard to solve with PostgreSQL:\n>\n> https://stackoverflow.com/questions/5106335/how-to-design-a-database-for-user-defined-fields\n>\n> In particular the easiest way would be to add a JSON field on the users\n> table (e.g. user metadata). However the PostgreSQL GIN index only supports\n> exact matches and not range queries. This means that a query on a range\n> (e.g. age > 30) would be extremely inefficient and would result in a table\n> scan.\n>\n\nYou could have a table of (tenant, customer, setting_name, setting_value)\nso that a btree index on (tenant, setting_name, setting_value) would work\nfor \"select customer from my_table where tenant=$1 and setting_name='age'\nand setting_value > 30\"\n\nThat doesn't deal with setting values having a variety of types, but you\ncould have a distinct user defined settings table for each setting value\ntype that you want to support.\n\nOn Fri, 4 Dec 2020 at 15:39, Marco Colli <[email protected]> wrote:Hello!We have a multi-tenant service where each customer has millions of users (total: ~150M rows). Now we would like to let each customer define some custom columns for his users and then let the customer search his users efficiently based on these columns.This problem seems really hard to solve with PostgreSQL:https://stackoverflow.com/questions/5106335/how-to-design-a-database-for-user-defined-fieldsIn particular the easiest way would be to add a JSON field on the users table (e.g. user metadata). However the PostgreSQL GIN index only supports exact matches and not range queries. This means that a query on a range (e.g. 
age > 30) would be extremely inefficient and would result in a table scan.You could have a table of (tenant, customer, setting_name, setting_value) so that a btree index on (tenant, setting_name, setting_value) would work for \"select customer from my_table where tenant=$1 and setting_name='age' and setting_value > 30\" That doesn't deal with setting values having a variety of types, but you could have a distinct user defined settings table for each setting value type that you want to support.",
"msg_date": "Fri, 4 Dec 2020 22:39:45 +0000",
"msg_from": "Nick Cleaton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index for range queries on JSON (user defined fields)"
},
{
"msg_contents": "Thanks for the suggestion: I had already considered that solution (first\nlink), but the fear is having to JOIN large tables with hundreds of\nmillions of records.\n\nFor my understanding **using JOIN when dealing with big data is bad and a\nnightmare for performance**: can you confirm? Or am I missing something?\n\nThat tables would be frequently read and updated and are the core of the\napplication: that also means that every update on a user would produce\n**many dead rows** - not just 1 user row, as in the case of JSON, but many\nrows in the user metadata table.\n\n\n\n\n\nOn Fri, Dec 4, 2020 at 11:40 PM Nick Cleaton <[email protected]> wrote:\n\n> On Fri, 4 Dec 2020 at 15:39, Marco Colli <[email protected]> wrote:\n>\n>> Hello!\n>>\n>> We have a multi-tenant service where each customer has millions of users\n>> (total: ~150M rows). Now we would like to let each customer define some\n>> custom columns for his users and then let the customer search his users\n>> efficiently based on these columns.\n>>\n>> This problem seems really hard to solve with PostgreSQL:\n>>\n>> https://stackoverflow.com/questions/5106335/how-to-design-a-database-for-user-defined-fields\n>>\n>> In particular the easiest way would be to add a JSON field on the users\n>> table (e.g. user metadata). However the PostgreSQL GIN index only supports\n>> exact matches and not range queries. This means that a query on a range\n>> (e.g. 
age > 30) would be extremely inefficient and would result in a table\n>> scan.\n>>\n>\n> You could have a table of (tenant, customer, setting_name, setting_value)\n> so that a btree index on (tenant, setting_name, setting_value) would work\n> for \"select customer from my_table where tenant=$1 and setting_name='age'\n> and setting_value > 30\"\n>\n> That doesn't deal with setting values having a variety of types, but you\n> could have a distinct user defined settings table for each setting value\n> type that you want to support.\n>\n>\n\nThanks for the suggestion: I had already considered that solution (first link), but the fear is having to JOIN large tables with hundreds of millions of records. For my understanding **using JOIN when dealing with big data is bad and a nightmare for performance**: can you confirm? Or am I missing something?That tables would be frequently read and updated and are the core of the application: that also means that every update on a user would produce **many dead rows** - not just 1 user row, as in the case of JSON, but many rows in the user metadata table.On Fri, Dec 4, 2020 at 11:40 PM Nick Cleaton <[email protected]> wrote:On Fri, 4 Dec 2020 at 15:39, Marco Colli <[email protected]> wrote:Hello!We have a multi-tenant service where each customer has millions of users (total: ~150M rows). Now we would like to let each customer define some custom columns for his users and then let the customer search his users efficiently based on these columns.This problem seems really hard to solve with PostgreSQL:https://stackoverflow.com/questions/5106335/how-to-design-a-database-for-user-defined-fieldsIn particular the easiest way would be to add a JSON field on the users table (e.g. user metadata). However the PostgreSQL GIN index only supports exact matches and not range queries. This means that a query on a range (e.g. 
age > 30) would be extremely inefficient and would result in a table scan.You could have a table of (tenant, customer, setting_name, setting_value) so that a btree index on (tenant, setting_name, setting_value) would work for \"select customer from my_table where tenant=$1 and setting_name='age' and setting_value > 30\" That doesn't deal with setting values having a variety of types, but you could have a distinct user defined settings table for each setting value type that you want to support.",
"msg_date": "Sat, 5 Dec 2020 11:50:09 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index for range queries on JSON (user defined fields)"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are getting this alert frequently \"Required checkpoints occurs too\nfrequently\" on postgres version 11.8\n\nThe RAM of the server is 16 GB.\n\nand we have already set the max_wal_size= 4096 MB\nmin_wal_size= 192 MB.\n\nPlease help me in optimizing the same to avoid this alert.\n\n\nRegards,\nAtul\n\n\n",
"msg_date": "Fri, 11 Dec 2020 13:42:40 +0530",
"msg_from": "Atul Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Required checkpoints occurs too frequently\""
},
{
"msg_contents": "On Fri, 2020-12-11 at 13:42 +0530, Atul Kumar wrote:\n> We are getting this alert frequently \"Required checkpoints occurs too\n> frequently\" on postgres version 11.8\n> \n> The RAM of the server is 16 GB.\n> \n> and we have already set the max_wal_size= 4096 MB\n> min_wal_size= 192 MB.\n\nYou should increase \"max_wal_size\" even more.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 11 Dec 2020 09:51:23 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Required checkpoints occurs too frequently\""
},
{
"msg_contents": "how much size should I increase in \"max_wal_size\".\n\nDo we need to change any other parameter's value also ?\n\n\n\nRegards,\nAtul\n\nOn 12/11/20, Laurenz Albe <[email protected]> wrote:\n> On Fri, 2020-12-11 at 13:42 +0530, Atul Kumar wrote:\n>> We are getting this alert frequently \"Required checkpoints occurs too\n>> frequently\" on postgres version 11.8\n>>\n>> The RAM of the server is 16 GB.\n>>\n>> and we have already set the max_wal_size= 4096 MB\n>> min_wal_size= 192 MB.\n>\n> You should increase \"max_wal_size\" even more.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n\n",
"msg_date": "Fri, 11 Dec 2020 14:32:10 +0530",
"msg_from": "Atul Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"Required checkpoints occurs too frequently\""
},
{
"msg_contents": "it depends on your cluster environment. you need to know how much wal is\ncreated in checkpoint_timeout duration. for example your\ncheckpoint_timeout = 30 min, you need to measure how much wal is created in\n30 minute. and then you can increase max_wal_size according to this size.\n\n\n\nAtul Kumar <[email protected]>, 11 Ara 2020 Cum, 12:02 tarihinde şunu\nyazdı:\n\n> how much size should I increase in \"max_wal_size\".\n>\n> Do we need to change any other parameter's value also ?\n>\n>\n>\n> Regards,\n> Atul\n>\n> On 12/11/20, Laurenz Albe <[email protected]> wrote:\n> > On Fri, 2020-12-11 at 13:42 +0530, Atul Kumar wrote:\n> >> We are getting this alert frequently \"Required checkpoints occurs too\n> >> frequently\" on postgres version 11.8\n> >>\n> >> The RAM of the server is 16 GB.\n> >>\n> >> and we have already set the max_wal_size= 4096 MB\n> >> min_wal_size= 192 MB.\n> >\n> > You should increase \"max_wal_size\" even more.\n> >\n> > Yours,\n> > Laurenz Albe\n> > --\n> > Cybertec | https://www.cybertec-postgresql.com\n> >\n> >\n>\n>\n>",
"msg_date": "Fri, 11 Dec 2020 12:19:08 +0300",
"msg_from": "Amine Tengilimoglu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Required checkpoints occurs too frequently\""
},
{
"msg_contents": "What do you mean by “how much wal is created”\nHow total Wal files in size or how much total wal files in numbers.\n\nPlease let me know.\n\n\nRegards\nAtul\n\n\n\n\nOn Friday, December 11, 2020, Amine Tengilimoglu <\[email protected]> wrote:\n\n> it depends on your cluster environment. you need to know how much wal is\n> created in checkpoint_timeout duration. for example your\n> checkpoint_timeout = 30 min, you need to measure how much wal is created in\n> 30 minute. and then you can increase max_wal_size according to this size.\n>\n>\n>\n> Atul Kumar <[email protected]>, 11 Ara 2020 Cum, 12:02 tarihinde şunu\n> yazdı:\n>\n>> how much size should I increase in \"max_wal_size\".\n>>\n>> Do we need to change any other parameter's value also ?\n>>\n>>\n>>\n>> Regards,\n>> Atul\n>>\n>> On 12/11/20, Laurenz Albe <[email protected]> wrote:\n>> > On Fri, 2020-12-11 at 13:42 +0530, Atul Kumar wrote:\n>> >> We are getting this alert frequently \"Required checkpoints occurs too\n>> >> frequently\" on postgres version 11.8\n>> >>\n>> >> The RAM of the server is 16 GB.\n>> >>\n>> >> and we have already set the max_wal_size= 4096 MB\n>> >> min_wal_size= 192 MB.\n>> >\n>> > You should increase \"max_wal_size\" even more.\n>> >\n>> > Yours,\n>> > Laurenz Albe\n>> > --\n>> > Cybertec | https://www.cybertec-postgresql.com\n>> >\n>> >\n>>\n>>\n>>",
"msg_date": "Fri, 11 Dec 2020 20:04:16 +0530",
"msg_from": "Atul Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"Required checkpoints occurs too frequently\""
},
{
"msg_contents": "\n\n> On Dec 11, 2020, at 06:34, Atul Kumar <[email protected]> wrote:\n> \n> What do you mean by “how much wal is created” \n> How total Wal files in size or how much total wal files in numbers.\n\nSince WAL segment files are a fixed size (almost always 16MB), those numbers are directly related.\n--\n-- Christophe Pettus\n [email protected]\n\n\n\n",
"msg_date": "Fri, 11 Dec 2020 08:15:39 -0800",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Required checkpoints occurs too frequently\""
}
] |
[
{
"msg_contents": "Hello,\n\nWe are having performance issues with a table partitioned by date, using\ncomposite type columns.\nI have attached the table definition, full reproducible of the issue and\nexecution plans to this email.\n\nUltimately, we want to sum certain fields contained in those composite\ntypes,\nusing a simple status filter across one month (31 partition) and group by\ndate.\nBut, we fail to get a satisfactory performance even on a single partition.\n\nI have tried multiple indexing options, but none work for us, as I will\nexplain.\nI will refer to these indexes as: ix1, ix2, ix3, ix4, ix5, ix6 and for the\nsecond part ix7.\nix1-ix6 are defined in the attached repro.sql file and the performance of\nthe query with each of them is shown in exec_plans file.\nix7 is defined in the repro_part_vs_parent.sql and performance of relevant\nqueries in exec_plans_part_vs_parent file.\n\nThis is the query targeting single partition:\n\nSELECT\nSUM(COALESCE((col1).a + (col1).b + (col1).c + (col1).d, 0)) AS val1,\nSUM(COALESCE((col2).y, 0)) AS val2\nFROM\npublic.\"mytable:2020-12-09\" --single partition of public.mytable\nWHERE status IN (1,2,3,4);\n\nWe get the best performance using ix2, while I would expect to get better\nperformance using ix3, and perhaps ix5.\n\nQuestions:\n1. Why cannot Postgres plan for index-only scan with ix3?\n2. Why is the query cost so high when using ix3?\n3. Is it possible to define an index such as ix3, that is, with a\ndrastically reduced size and listing only expressions we project?\n4. Are there any other indexing or query rewrite options that are worth\ntrying here?\n5. Judging by execution time, it seems that Postgres can leverage defined\nexpressions in ix2, so why not in ix3? Why must it fetch col1 and col2 from\nthe table when I force ix3 usage?\n6. 
As ix3 is only 53MB in size (see repro.sql) as opposed to ix1 and ix2\nwhich are 266MB and 280MB respectively, I would expect Postgres to use it\ninstead?\n\n\nIn addition to this, please look at the attached repro_part_vs_parent.sql\nfile and its related execution plans file.\nThere, I tried running a similar query on a partitioned table targeting a\nsingle partition, and afterwards on the partition itself.\nThe results confuse me. I would expect to get similar performance in both\nsituations, but the query runs much slower through the parent table.\nBy looking at the output of the seq scan node (parent query), it seems that\nrunning the query on the parent table prepends partition name as an alias\nto projected columns.\nDoes that make Postgres unable to recognize the expression in the index, or\nis there something else happening here?\n\nThese are the queries:\n\n--partitioned (parent) table, targeting single partition\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, SETTINGS)\nSELECT\ndt,\nSUM(COALESCE((col1).a + (col1).b + (col1).c + (col1).d, 0)) AS expected,\nSUM(COALESCE((col2).y, 0)) AS repayments\nFROM\npublic.mytable\nWHERE\ndt = '2020-12-09'\nAND status IN (1,2,3,4)\nGROUP BY\ndt;\n\n--querying the partition directly instead:\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, SETTINGS)\nSELECT\ndt,\nSUM(COALESCE((col1).a + (col1).b + (col1).c + (col1).d, 0)) AS expected,\nSUM(COALESCE((col2).y, 0)) AS repayments\nFROM\npublic.\"mytable:2020-12-09\"\nWHERE\ndt = '2020-12-09'\nAND status IN (1,2,3,4)\nGROUP BY\ndt;\n\nRelevant setup information:\npg version/OS (1): PostgreSQL 12.5, compiled by Visual C++ build 1914,\n64-bit / Windows 10\npg version/OS (2): PostgreSQL 12.4 on x86_64-pc-linux-gnu, compiled by gcc\n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit / CentOS Linux release\n7.8.2003 (Core)\ntotal number of table partitions: 31\nsingle partition size (with PK, no other indexes): 4GB\nsingle partition number of rows: 2M\nPostgres configuration settings can be 
observed in the provided execution\nplans\n\n\ndepesz links:\nno index: https://explain.depesz.com/s/8H93\nix1: https://explain.depesz.com/s/kEYi\nix2: https://explain.depesz.com/s/yydX\nix3: https://explain.depesz.com/s/gAFm\nix4: https://explain.depesz.com/s/8lbh\nix5: https://explain.depesz.com/s/WIqwK\nix6: https://explain.depesz.com/s/BNUc\nix7 (parent): https://explain.depesz.com/s/DqUf\nix7 (child): https://explain.depesz.com/s/ejmP\n\nAttached files:\n1. repro.sql: contains the code which will reproduce my issue\n2. exec_plans: lists execution plans for repro.sql I got on my machine with\neach of the mentioned indexes in place\n3. repro_part_vs_parent.sql: contains queries showing the unexpected\nperformance difference for the identical query ran on parent table vs.\nsingle partition\n4. exec_plans_part_vs_parent: lists relevant execution plans for\nrepro_part_vs_parent.sql\n\nThank you very much in advance.\nPlease let me know if something is unclear or if I can provide any other\nrelevant info.\n\nBest regards,\nSebastijan Wieser",
"msg_date": "Mon, 14 Dec 2020 16:01:48 +0100",
"msg_from": "Sebastijan Wieser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issues with composite types (partitioned table)"
}
] |
[
{
"msg_contents": "Hi all,\r\n\r\nWe have facing some discrepancy in Postgresql database related to the autovacuum functionality.\r\nBy default autovacuum was enable on Postgres which is used to remove the dead tuples from the database.\r\n\r\nWe have observed autovaccum cleaning dead rows from table_A but same was not functioning correctly for table_B which have a large size(100+GB) in comparision to table_A.\r\n\r\nAll the threshold level requirements for autovacuum was meet and there are about Million’s of dead tuples but autovacuum was unable to clear them, which cause performance issue on production server.\r\n\r\nIs autovacuum not working against large sized tables or Is there any parameters which need to set to make autovacuum functioning?\r\n\r\nAny suggestions?\r\n\r\nRegards\r\nTarkeshwar",
"msg_date": "Wed, 16 Dec 2020 11:54:55 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autovacuum not functioning for large tables but it is working for few\n other small tables."
},
{
"msg_contents": "Hi,\n\n> We have facing some discrepancy in Postgresql database related to the autovacuum functionality.\n>\n> By default autovacuum was enable on Postgres which is used to remove the dead tuples from the database.\n>\n> We have observed autovaccum cleaning dead rows from table_A but same was not functioning correctly for table_B which have a large size(100+GB) in comparision to table_A.\n>\n> All the threshold level requirements for autovacuum was meet and there are about Million’s of dead tuples but autovacuum was unable to clear them, which cause performance issue on production server.\n>\n> Is autovacuum not working against large sized tables or Is there any parameters which need to set to make autovacuum functioning?\n\n\nDo you have autovacuum logging enabled in this server? If so, would be\ngood if you could share them here.\n\nHaving the output from logs of autovacuum for these tables would give\nsome insights on where the problem might reside.\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see\n\n\n",
"msg_date": "Wed, 16 Dec 2020 12:14:49 -0300",
"msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "Absolutely check the logs, or do a manual vacuum verbose with setting cost\ndelay and cost limit (and maintenance work mem) the same as the values for\nauto vacuum runs. It should work out the same and you could time it for a\nperiod when the system is more lightly used it applicable.\n\nIf you have many very large indexes on the tables with a high number of\ntuples and bloat, that may be slowing the execution particularly if your\nallowed work memory for the operation doesn't allow a single pass of the\nindex.\n\nIf you are on PG12+, you can reindex concurrently and then run vacuum and\nsee how it goes.\n\nFreezing will automatically happen according to settings, but if it is near\nthe threshold then it could be that autovacuum is doing more work scanning\nold data. A manual vacuum freeze would mitigate that. That may not be\nsignificant though.\n\nFor your larger tables, or system in general, turning down your scale\nfactor settings will qualify tables for autovacuum sooner. If it hurts, you\naren't doing it often enough.\n\nAlso, reducing cost delays may be needed to pause for less time in the\nmiddle of autovacuum executions. The default changed from 20ms to 2ms with\nPG12 but if your I/O system can handle it, lower may be prudent to get the\nwork done more quickly.",
"msg_date": "Wed, 16 Dec 2020 08:28:04 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 6:55 AM M Tarkeshwar Rao <\[email protected]> wrote:\n\n> ...\n>\n\n>\n> All the threshold level requirements for autovacuum was meet and there are\n> about Million’s of dead tuples but autovacuum was unable to clear them,\n> which cause performance issue on production server.\n>\n\nIt might be helpful for us to see what data you are looking at to reach\nthis conclusion.\n\n\n>\n>\n> Is autovacuum not working against large sized tables or Is there any\n> parameters which need to set to make autovacuum functioning?\n>\n\nAutovacuum is not inherently broken for large tables. But vacuuming them\ntakes longer than for small tables. If it is frequently interrupted by\nthings like CREATE INDEX, ALTER TABLE, or database shutdown and restart,\nthen it might never get through the entire table without interruption. If\nit is getting interrupted, you should see messages in the log file about\nit. You can also check pg_stat_user_tables to see when it was last\nsuccessfully (to completion) auto vacuumed, and on new enough versions you\ncan look in pg_stat_progress_vacuum to monitor the vacuuming while it\noccurs.\n\nCheers,\n\nJeff\n\n>",
"msg_date": "Wed, 16 Dec 2020 13:18:00 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "On 12/16/20 12:55 PM, M Tarkeshwar Rao wrote:\n> Hi all,\n> \n> We have facing some discrepancy in Postgresql database related to the \n> autovacuum functionality.\n> \n> By default autovacuum was enable on Postgres which is used to remove the \n> dead tuples from the database.\n> \n> We have observed autovaccum cleaning dead rows from *table_A* but same \n> was not functioning correctly for *table_B* which have a large \n> size(100+GB) in comparision to table_A.\n> \n> All the threshold level requirements for autovacuum was meet and there \n> are about Million’s of dead tuples but autovacuum was unable to clear \n> them, which cause performance issue on production server.\n> \n> Is autovacuum not working against large sized tables or Is there any \n> parameters which need to set to make autovacuum functioning?\n> \n\nNo, autovacuum should work for tables with any size. The most likely \nexplanation is that the rows in the large table were deleted more \nrecently and there is a long-running transaction blocking the cleanup. \nOr maybe not, hard to say with the info you provided.\n\nA couple suggestions:\n\n1) enable logging for autovacuum by setting\n\n log_autovacuum_min_duration = 10ms (or similar low value)\n\n2) check that the autovacuum is actually executed on the large table \n(there's last_autovacuum in pg_stat_all_tables)\n\n3) try running VACUUM VERBOSE on the large table, it may tell you that \nthe rows can't be cleaned up yet.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 17 Dec 2020 02:46:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "Hi all,\r\n\r\nAs we know, the VACUUM VERBOSE output has a lot of dependencies from production end and is indefinite as of now. We don’t have any clue till now on why exactly the auto-vacuum is not working for the table. So we need to have a work around to move ahead for the time being.\r\n \r\nCan you please suggest any workaround so that we can resolve the issue or any other way by which we can avoid this situation?\r\n\r\nRegards\r\nTarkeshwar\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra <[email protected]> \r\nSent: Thursday, December 17, 2020 7:16 AM\r\nTo: M Tarkeshwar Rao <[email protected]>; [email protected]\r\nCc: Neeraj Gupta G <[email protected]>; Atul Parashar <[email protected]>; Shishir Singh <[email protected]>; Ankit Sharma <[email protected]>\r\nSubject: Re: Autovacuum not functioning for large tables but it is working for few other small tables.\r\n\r\nOn 12/16/20 12:55 PM, M Tarkeshwar Rao wrote:\r\n> Hi all,\r\n> \r\n> We have facing some discrepancy in Postgresql database related to the \r\n> autovacuum functionality.\r\n> \r\n> By default autovacuum was enable on Postgres which is used to remove \r\n> the dead tuples from the database.\r\n> \r\n> We have observed autovaccum cleaning dead rows from *table_A* but same \r\n> was not functioning correctly for *table_B* which have a large\r\n> size(100+GB) in comparision to table_A.\r\n> \r\n> All the threshold level requirements for autovacuum was meet and there \r\n> are about Million’s of dead tuples but autovacuum was unable to clear \r\n> them, which cause performance issue on production server.\r\n> \r\n> Is autovacuum not working against large sized tables or Is there any \r\n> parameters which need to set to make autovacuum functioning?\r\n> \r\n\r\nNo, autovacuum should work for tables with any size. The most likely explanation is that the rows in the large table were deleted more recently and there is a long-running transaction blocking the cleanup. 
\r\nOr maybe not, hard to say with the info you provided.\r\n\r\nA couple suggestions:\r\n\r\n1) enable logging for autovacuum by setting\r\n\r\n log_autovacuum_min_duration = 10ms (or similar low value)\r\n\r\n2) check that the autovacuum is actually executed on the large table (there's last_autovacuum in pg_stat_all_tables)\r\n\r\n3) try running VACUUM VERBOSE on the large table, it may tell you that the rows can't be cleaned up yet.\r\n\r\n\r\nregards\r\n\r\n-- \r\nTomas Vondra\r\nEnterpriseDB: http://www.enterprisedb.com\r\nThe Enterprise PostgreSQL Company\r\n",
"msg_date": "Fri, 8 Jan 2021 11:59:57 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "By the way, please do not top-post (reply above, quoting the full email\nafter) in these groups.\n\n\nOn Fri, Jan 8, 2021 at 5:00 AM M Tarkeshwar Rao <\[email protected]> wrote:\n\n> Hi all,\n>\n> As we know, the VACUUM VERBOSE output has a lot of dependencies from\n> production end and is indefinite as of now.\n\n\nWhat do you mean by this statement?\n\n\n> We don’t have any clue till now on why exactly the auto-vacuum is not\n> working for the table. So we need to have a work around to move ahead for\n> the time being.\n>\n> Can you please suggest any workaround so that we can resolve the issue or\n> any other way by which we can avoid this situation?\n>\n\nHave you tried any of the suggestions already given 3+ weeks ago? Do you\nhave answers to any of the questions posed by me or the other three people\nwho responded?\n\nBy the way, please do not top-post (reply above, quoting the full email after) in these groups.On Fri, Jan 8, 2021 at 5:00 AM M Tarkeshwar Rao <[email protected]> wrote:Hi all,\n\nAs we know, the VACUUM VERBOSE output has a lot of dependencies from production end and is indefinite as of now.What do you mean by this statement? We don’t have any clue till now on why exactly the auto-vacuum is not working for the table. So we need to have a work around to move ahead for the time being.\n\nCan you please suggest any workaround so that we can resolve the issue or any other way by which we can avoid this situation?Have you tried any of the suggestions already given 3+ weeks ago? Do you have answers to any of the questions posed by me or the other three people who responded?",
"msg_date": "Fri, 8 Jan 2021 08:52:36 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "Hi,\r\n\r\nPlease find the Vacuum(verbose) output. Can you please suggest what is the reason?\r\nHow can we avoid these scenarios?\r\n\r\nThe customer tried to run the VACUUM(verbose) last night, but it was running continuously for 5 hours without any visible progress. So they had to abort it as it was going to exhaust their maintenance window.\r\n\r\n db_Server14=# VACUUM (VERBOSE) audittraillogentry;\r\nINFO: vacuuming \"mmsuper.audittraillogentry\"\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184539 row versions\r\nDETAIL: CPU 25.24s/49.11u sec elapsed 81.33 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184539 row versions\r\nDETAIL: CPU 23.27s/59.28u sec elapsed 88.63 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184539 row versions\r\nDETAIL: CPU 27.02s/55.10u sec elapsed 92.04 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184539 row versions\r\nDETAIL: CPU 110.81s/72.29u sec elapsed 260.71 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184539 row versions\r\nDETAIL: CPU 100.49s/87.03u sec elapsed 265.00 sec\r\nINFO: \"audittraillogentry\": removed 11184539 row versions in 247622 pages\r\nDETAIL: CPU 3.23s/0.89u sec elapsed 6.64 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184545 row versions\r\nDETAIL: CPU 25.73s/45.72u sec elapsed 86.59 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184545 row versions\r\nDETAIL: CPU 34.65s/56.52u sec elapsed 113.52 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184545 row versions\r\nDETAIL: CPU 35.55s/61.96u sec elapsed 113.89 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184545 row versions\r\nDETAIL: CPU 120.60s/75.17u sec elapsed 286.78 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184545 row versions\r\nDETAIL: CPU 111.87s/93.74u sec elapsed 295.05 sec\r\nINFO: \"audittraillogentry\": removed 11184545 row versions in 1243407 
pages\r\nDETAIL: CPU 20.35s/6.45u sec elapsed 71.61 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184547 row versions\r\nDETAIL: CPU 21.84s/43.36u sec elapsed 71.72 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184547 row versions\r\nDETAIL: CPU 33.37s/57.07u sec elapsed 99.50 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184547 row versions\r\nDETAIL: CPU 35.08s/60.08u sec elapsed 110.08 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184547 row versions\r\nDETAIL: CPU 117.72s/72.75u sec elapsed 256.31 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184547 row versions\r\nDETAIL: CPU 103.46s/77.43u sec elapsed 247.23 sec\r\nINFO: \"audittraillogentry\": removed 11184547 row versions in 268543 pages\r\nDETAIL: CPU 4.36s/1.35u sec elapsed 9.61 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184521 row versions\r\nDETAIL: CPU 26.64s/45.46u sec elapsed 80.51 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184521 row versions\r\nDETAIL: CPU 35.05s/59.11u sec elapsed 111.23 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184521 row versions\r\nDETAIL: CPU 32.98s/56.41u sec elapsed 105.93 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184521 row versions\r\nDETAIL: CPU 117.13s/71.14u sec elapsed 254.33 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184521 row versions\r\nDETAIL: CPU 99.93s/81.77u sec elapsed 241.83 sec\r\nINFO: \"audittraillogentry\": removed 11184521 row versions in 268593 pages\r\nDETAIL: CPU 3.49s/1.14u sec elapsed 6.87 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184534 row versions\r\nDETAIL: CPU 22.73s/42.41u sec elapsed 69.12 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184534 row versions\r\nDETAIL: CPU 36.78s/68.04u sec elapsed 121.60 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184534 row versions\r\nDETAIL: CPU 31.11s/52.88u 
sec elapsed 93.93 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184534 row versions\r\nDETAIL: CPU 117.95s/72.65u sec elapsed 247.44 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184534 row versions\r\nDETAIL: CPU 104.25s/82.63u sec elapsed 248.43 sec\r\nINFO: \"audittraillogentry\": removed 11184534 row versions in 268598 pages\r\nDETAIL: CPU 3.74s/1.17u sec elapsed 9.45 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184546 row versions\r\nDETAIL: CPU 21.24s/40.72u sec elapsed 68.78 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184546 row versions\r\nDETAIL: CPU 34.29s/56.72u sec elapsed 99.63 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184546 row versions\r\nDETAIL: CPU 33.83s/60.99u sec elapsed 105.22 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184546 row versions\r\nDETAIL: CPU 114.26s/70.11u sec elapsed 239.56 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184546 row versions\r\nDETAIL: CPU 100.73s/73.28u sec elapsed 228.37 sec\r\nINFO: \"audittraillogentry\": removed 11184546 row versions in 268538 pages\r\nDETAIL: CPU 3.80s/1.18u sec elapsed 7.79 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184523 row versions\r\nDETAIL: CPU 25.78s/47.23u sec elapsed 77.60 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184523 row versions\r\nDETAIL: CPU 35.39s/56.45u sec elapsed 103.70 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184523 row versions\r\nDETAIL: CPU 31.16s/52.24u sec elapsed 90.21 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184523 row versions\r\nDETAIL: CPU 114.71s/70.03u sec elapsed 260.11 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184523 row versions\r\nDETAIL: CPU 105.71s/76.33u sec elapsed 228.59 sec\r\nINFO: \"audittraillogentry\": removed 11184523 row versions in 268611 pages\r\nDETAIL: CPU 3.40s/1.17u sec elapsed 7.10 sec\r\nINFO: scanned 
index \"audittraillogentry_pkey\" to remove 11184554 row versions\r\nDETAIL: CPU 22.80s/39.22u sec elapsed 67.26 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184554 row versions\r\nDETAIL: CPU 35.38s/57.31u sec elapsed 106.01 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184554 row versions\r\nDETAIL: CPU 34.15s/54.73u sec elapsed 97.79 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184554 row versions\r\nDETAIL: CPU 118.37s/71.55u sec elapsed 243.34 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184554 row versions\r\nDETAIL: CPU 100.43s/72.41u sec elapsed 252.42 sec\r\nINFO: \"audittraillogentry\": removed 11184554 row versions in 268590 pages\r\nDETAIL: CPU 4.40s/1.34u sec elapsed 9.00 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184533 row versions\r\nDETAIL: CPU 25.01s/40.12u sec elapsed 72.19 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184533 row versions\r\nDETAIL: CPU 34.13s/52.89u sec elapsed 93.53 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184533 row versions\r\nDETAIL: CPU 31.29s/50.04u sec elapsed 88.22 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184533 row versions\r\nDETAIL: CPU 119.38s/66.95u sec elapsed 257.04 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184533 row versions\r\nDETAIL: CPU 102.33s/74.23u sec elapsed 230.70 sec\r\nINFO: \"audittraillogentry\": removed 11184533 row versions in 268627 pages\r\nDETAIL: CPU 3.94s/1.28u sec elapsed 7.74 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184536 row versions\r\nDETAIL: CPU 22.67s/38.49u sec elapsed 66.67 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184536 row versions\r\nDETAIL: CPU 37.17s/61.79u sec elapsed 107.70 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184536 row versions\r\nDETAIL: CPU 32.23s/51.13u sec elapsed 90.93 sec\r\nINFO: scanned index \"audit_sourceid_index\" to 
remove 11184536 row versions\r\nDETAIL: CPU 117.68s/70.04u sec elapsed 239.51 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184536 row versions\r\nDETAIL: CPU 103.82s/72.82u sec elapsed 228.64 sec\r\nINFO: \"audittraillogentry\": removed 11184536 row versions in 268597 pages\r\nDETAIL: CPU 4.01s/1.34u sec elapsed 8.74 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184533 row versions\r\nDETAIL: CPU 26.34s/39.03u sec elapsed 70.76 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184533 row versions\r\nDETAIL: CPU 35.98s/53.60u sec elapsed 99.27 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184533 row versions\r\nDETAIL: CPU 32.57s/50.71u sec elapsed 90.61 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184533 row versions\r\nDETAIL: CPU 122.50s/64.66u sec elapsed 254.06 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184533 row versions\r\nDETAIL: CPU 100.87s/78.60u sec elapsed 237.31 sec\r\nINFO: \"audittraillogentry\": removed 11184533 row versions in 268643 pages\r\nDETAIL: CPU 4.01s/1.23u sec elapsed 7.69 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184535 row versions\r\nDETAIL: CPU 22.65s/36.84u sec elapsed 61.70 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184535 row versions\r\nDETAIL: CPU 37.86s/59.20u sec elapsed 104.94 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184535 row versions\r\nDETAIL: CPU 32.06s/48.99u sec elapsed 88.31 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184535 row versions\r\nDETAIL: CPU 120.01s/69.92u sec elapsed 245.13 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184535 row versions\r\nDETAIL: CPU 102.99s/69.48u sec elapsed 216.71 sec\r\nINFO: \"audittraillogentry\": removed 11184535 row versions in 268574 pages\r\nDETAIL: CPU 4.27s/1.41u sec elapsed 9.40 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184545 row 
versions\r\nDETAIL: CPU 26.12s/39.21u sec elapsed 71.64 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184545 row versions\r\nDETAIL: CPU 35.67s/52.12u sec elapsed 95.95 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184545 row versions\r\nDETAIL: CPU 32.68s/47.59u sec elapsed 86.58 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184545 row versions\r\nDETAIL: CPU 118.72s/64.51u sec elapsed 249.14 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184545 row versions\r\nDETAIL: CPU 103.10s/76.75u sec elapsed 248.05 sec\r\nINFO: \"audittraillogentry\": removed 11184545 row versions in 268662 pages\r\nDETAIL: CPU 3.69s/1.18u sec elapsed 7.75 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184521 row versions\r\nDETAIL: CPU 22.80s/35.86u sec elapsed 61.23 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184521 row versions\r\nDETAIL: CPU 35.79s/53.76u sec elapsed 97.45 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184521 row versions\r\nDETAIL: CPU 33.41s/46.93u sec elapsed 93.18 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184521 row versions\r\nDETAIL: CPU 117.29s/66.18u sec elapsed 224.79 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184521 row versions\r\nDETAIL: CPU 104.67s/68.33u sec elapsed 226.39 sec\r\nINFO: \"audittraillogentry\": removed 11184521 row versions in 268576 pages\r\nDETAIL: CPU 3.76s/1.08u sec elapsed 7.49 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184525 row versions\r\nDETAIL: CPU 25.06s/39.94u sec elapsed 70.43 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184525 row versions\r\nDETAIL: CPU 35.01s/50.04u sec elapsed 94.04 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184525 row versions\r\nDETAIL: CPU 31.41s/45.69u sec elapsed 84.37 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184525 row versions\r\nDETAIL: CPU 118.28s/63.16u 
sec elapsed 244.28 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184525 row versions\r\nDETAIL: CPU 105.60s/73.95u sec elapsed 227.47 sec\r\nINFO: \"audittraillogentry\": removed 11184525 row versions in 268660 pages\r\nDETAIL: CPU 3.91s/1.25u sec elapsed 7.51 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184538 row versions\r\nDETAIL: CPU 23.79s/34.59u sec elapsed 62.01 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184538 row versions\r\nDETAIL: CPU 36.86s/51.24u sec elapsed 99.10 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184538 row versions\r\nDETAIL: CPU 34.95s/53.11u sec elapsed 98.44 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184538 row versions\r\nDETAIL: CPU 115.09s/62.14u sec elapsed 229.85 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184538 row versions\r\nDETAIL: CPU 107.02s/65.97u sec elapsed 218.05 sec\r\nINFO: \"audittraillogentry\": removed 11184538 row versions in 268584 pages\r\nDETAIL: CPU 3.46s/1.30u sec elapsed 7.03 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184546 row versions\r\nDETAIL: CPU 23.68s/33.59u sec elapsed 60.67 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184546 row versions\r\nDETAIL: CPU 39.63s/54.93u sec elapsed 106.66 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184546 row versions\r\nDETAIL: CPU 32.55s/44.43u sec elapsed 84.53 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184546 row versions\r\nDETAIL: CPU 122.49s/63.49u sec elapsed 235.39 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184546 row versions\r\nDETAIL: CPU 108.09s/69.68u sec elapsed 227.05 sec\r\nINFO: \"audittraillogentry\": removed 11184546 row versions in 269472 pages\r\nDETAIL: CPU 4.32s/1.33u sec elapsed 8.72 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184536 row versions\r\nDETAIL: CPU 23.70s/32.98u sec elapsed 62.22 sec\r\nINFO: 
scanned index \"audit_intime_index\" to remove 11184536 row versions\r\nDETAIL: CPU 35.77s/46.57u sec elapsed 88.27 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184536 row versions\r\nDETAIL: CPU 32.59s/43.16u sec elapsed 82.06 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184536 row versions\r\nDETAIL: CPU 126.27s/60.18u sec elapsed 258.72 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184536 row versions\r\nDETAIL: CPU 112.57s/65.24u sec elapsed 232.06 sec\r\nINFO: \"audittraillogentry\": removed 11184536 row versions in 269319 pages\r\nDETAIL: CPU 3.73s/1.29u sec elapsed 7.58 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184538 row versions\r\nDETAIL: CPU 23.22s/32.16u sec elapsed 60.22 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184538 row versions\r\nDETAIL: CPU 38.42s/51.43u sec elapsed 101.53 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184538 row versions\r\nDETAIL: CPU 33.29s/42.79u sec elapsed 88.70 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184538 row versions\r\nDETAIL: CPU 124.04s/62.06u sec elapsed 230.83 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184538 row versions\r\nDETAIL: CPU 105.41s/64.14u sec elapsed 223.93 sec\r\nINFO: \"audittraillogentry\": removed 11184538 row versions in 269384 pages\r\nDETAIL: CPU 3.69s/1.11u sec elapsed 7.79 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184520 row versions\r\nDETAIL: CPU 26.60s/34.89u sec elapsed 64.47 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184520 row versions\r\nDETAIL: CPU 36.01s/45.24u sec elapsed 88.69 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184520 row versions\r\nDETAIL: CPU 33.00s/41.31u sec elapsed 83.02 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184520 row versions\r\nDETAIL: CPU 124.80s/58.92u sec elapsed 246.98 sec\r\nINFO: scanned index \"audit_destid_index\" 
to remove 11184520 row versions\r\nDETAIL: CPU 106.35s/71.38u sec elapsed 249.67 sec\r\nINFO: \"audittraillogentry\": removed 11184520 row versions in 269050 pages\r\nDETAIL: CPU 3.74s/1.16u sec elapsed 8.87 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184523 row versions\r\nDETAIL: CPU 21.95s/30.36u sec elapsed 59.88 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184523 row versions\r\nDETAIL: CPU 33.84s/42.86u sec elapsed 88.67 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184523 row versions\r\nDETAIL: CPU 35.71s/44.46u sec elapsed 95.35 sec\r\nINFO: scanned index \"audit_sourceid_index\" to remove 11184523 row versions\r\nDETAIL: CPU 120.51s/61.81u sec elapsed 249.04 sec\r\nINFO: scanned index \"audit_destid_index\" to remove 11184523 row versions\r\nDETAIL: CPU 103.16s/62.69u sec elapsed 231.34 sec\r\nINFO: \"audittraillogentry\": removed 11184523 row versions in 266741 pages\r\nDETAIL: CPU 4.27s/1.24u sec elapsed 8.26 sec\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184551 row versions\r\nDETAIL: CPU 25.89s/37.48u sec elapsed 69.65 sec\r\nINFO: scanned index \"audit_intime_index\" to remove 11184551 row versions\r\nDETAIL: CPU 35.74s/43.70u sec elapsed 100.58 sec\r\nINFO: scanned index \"audit_outtime_index\" to remove 11184551 row versions\r\nDETAIL: CPU 31.45s/40.14u sec elapsed 84.00 sec\r\n\r\ndb_Server14=# SELECT pid, datname, usename, state, backend_xmin FROM pg_stat_activity WHERE backend_xmin IS NOT NULL ORDER BY age(backend_xmin) DESC;\r\n pid | datname | usename | state | backend_xmin\r\n-------+----------------+----------+--------+--------------\r\n73583 | fm_db_Server14 | mmsuper | active | 63548809\r\n31359 | fm_db_Server14 | postgres | active | 63548812\r\n52761 | fm_db_Server14 | mmsuper | active | 63548814\r\n53197 | fm_db_Server14 | mmsuper | active | 63548815\r\n53409 | fm_db_Server14 | mmsuper | active | 63548815\r\n38917 | fm_db_Server14 | mmsuper | active | 
63548818\r\n(6 rows)\r\n\r\ndb_Server14=# SELECT slot_name, slot_type, database, xmin FROM pg_replication_slots ORDER BY age(xmin) DESC;\r\nslot_name | slot_type | database | xmin\r\n-----------+-----------+----------+------\r\n(0 rows)\r\n\r\ndb_Server14=# SELECT gid, prepared, owner, database, transaction AS xmin FROM pg_prepared_xacts ORDER BY age(transaction) DESC;\r\ngid | prepared | owner | database | xmin\r\n-----+----------+-------+----------+------\r\n(0 rows)\r\n\r\nRegards\r\nTarkeshwar\r\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nPlease find the Vacuum(verbose) output. Can you please suggest what is the reason?\nHow can we avoid these scenarios? \n \nThe customer tried to run the VACUUM(verbose) last night, but it was running continuously for 5 hours without any visible progress. So they had to abort it as it was going to exhaust their maintenance window.\n \n db_Server14=# VACUUM (VERBOSE) audittraillogentry;\nINFO: vacuuming \"mmsuper.audittraillogentry\"\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184539 row versions\nDETAIL: CPU 25.24s/49.11u sec elapsed 81.33 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184539 row versions\nDETAIL: CPU 23.27s/59.28u sec elapsed 88.63 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184539 row versions\nDETAIL: CPU 27.02s/55.10u sec elapsed 92.04 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184539 row versions\nDETAIL: CPU 110.81s/72.29u sec elapsed 260.71 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184539 row versions\nDETAIL: CPU 100.49s/87.03u sec elapsed 265.00 sec\nINFO: \"audittraillogentry\": removed 11184539 row versions in 247622 pages\nDETAIL: CPU 3.23s/0.89u sec elapsed 6.64 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184545 row versions\nDETAIL: CPU 25.73s/45.72u sec elapsed 86.59 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184545 row versions\nDETAIL: CPU 34.65s/56.52u sec elapsed 113.52 sec\nINFO: 
scanned index \"audit_outtime_index\" to remove 11184545 row versions\nDETAIL: CPU 35.55s/61.96u sec elapsed 113.89 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184545 row versions\nDETAIL: CPU 120.60s/75.17u sec elapsed 286.78 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184545 row versions\nDETAIL: CPU 111.87s/93.74u sec elapsed 295.05 sec\nINFO: \"audittraillogentry\": removed 11184545 row versions in 1243407 pages\nDETAIL: CPU 20.35s/6.45u sec elapsed 71.61 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184547 row versions\nDETAIL: CPU 21.84s/43.36u sec elapsed 71.72 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184547 row versions\nDETAIL: CPU 33.37s/57.07u sec elapsed 99.50 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184547 row versions\nDETAIL: CPU 35.08s/60.08u sec elapsed 110.08 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184547 row versions\nDETAIL: CPU 117.72s/72.75u sec elapsed 256.31 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184547 row versions\nDETAIL: CPU 103.46s/77.43u sec elapsed 247.23 sec\nINFO: \"audittraillogentry\": removed 11184547 row versions in 268543 pages\nDETAIL: CPU 4.36s/1.35u sec elapsed 9.61 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184521 row versions\nDETAIL: CPU 26.64s/45.46u sec elapsed 80.51 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184521 row versions\nDETAIL: CPU 35.05s/59.11u sec elapsed 111.23 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184521 row versions\nDETAIL: CPU 32.98s/56.41u sec elapsed 105.93 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184521 row versions\nDETAIL: CPU 117.13s/71.14u sec elapsed 254.33 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184521 row versions\nDETAIL: CPU 99.93s/81.77u sec elapsed 241.83 sec\nINFO: \"audittraillogentry\": removed 11184521 row versions in 268593 pages\nDETAIL: CPU 
3.49s/1.14u sec elapsed 6.87 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184534 row versions\nDETAIL: CPU 22.73s/42.41u sec elapsed 69.12 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184534 row versions\nDETAIL: CPU 36.78s/68.04u sec elapsed 121.60 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184534 row versions\nDETAIL: CPU 31.11s/52.88u sec elapsed 93.93 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184534 row versions\nDETAIL: CPU 117.95s/72.65u sec elapsed 247.44 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184534 row versions\nDETAIL: CPU 104.25s/82.63u sec elapsed 248.43 sec\nINFO: \"audittraillogentry\": removed 11184534 row versions in 268598 pages\nDETAIL: CPU 3.74s/1.17u sec elapsed 9.45 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184546 row versions\nDETAIL: CPU 21.24s/40.72u sec elapsed 68.78 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184546 row versions\nDETAIL: CPU 34.29s/56.72u sec elapsed 99.63 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184546 row versions\nDETAIL: CPU 33.83s/60.99u sec elapsed 105.22 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184546 row versions\nDETAIL: CPU 114.26s/70.11u sec elapsed 239.56 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184546 row versions\nDETAIL: CPU 100.73s/73.28u sec elapsed 228.37 sec\nINFO: \"audittraillogentry\": removed 11184546 row versions in 268538 pages\nDETAIL: CPU 3.80s/1.18u sec elapsed 7.79 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184523 row versions\nDETAIL: CPU 25.78s/47.23u sec elapsed 77.60 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184523 row versions\nDETAIL: CPU 35.39s/56.45u sec elapsed 103.70 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184523 row versions\nDETAIL: CPU 31.16s/52.24u sec elapsed 90.21 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 
11184523 row versions\nDETAIL: CPU 114.71s/70.03u sec elapsed 260.11 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184523 row versions\nDETAIL: CPU 105.71s/76.33u sec elapsed 228.59 sec\nINFO: \"audittraillogentry\": removed 11184523 row versions in 268611 pages\nDETAIL: CPU 3.40s/1.17u sec elapsed 7.10 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184554 row versions\nDETAIL: CPU 22.80s/39.22u sec elapsed 67.26 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184554 row versions\nDETAIL: CPU 35.38s/57.31u sec elapsed 106.01 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184554 row versions\nDETAIL: CPU 34.15s/54.73u sec elapsed 97.79 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184554 row versions\nDETAIL: CPU 118.37s/71.55u sec elapsed 243.34 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184554 row versions\nDETAIL: CPU 100.43s/72.41u sec elapsed 252.42 sec\nINFO: \"audittraillogentry\": removed 11184554 row versions in 268590 pages\nDETAIL: CPU 4.40s/1.34u sec elapsed 9.00 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184533 row versions\nDETAIL: CPU 25.01s/40.12u sec elapsed 72.19 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184533 row versions\nDETAIL: CPU 34.13s/52.89u sec elapsed 93.53 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184533 row versions\nDETAIL: CPU 31.29s/50.04u sec elapsed 88.22 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184533 row versions\nDETAIL: CPU 119.38s/66.95u sec elapsed 257.04 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184533 row versions\nDETAIL: CPU 102.33s/74.23u sec elapsed 230.70 sec\nINFO: \"audittraillogentry\": removed 11184533 row versions in 268627 pages\nDETAIL: CPU 3.94s/1.28u sec elapsed 7.74 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184536 row versions\nDETAIL: CPU 22.67s/38.49u sec elapsed 66.67 sec\nINFO: scanned index 
\"audit_intime_index\" to remove 11184536 row versions\nDETAIL: CPU 37.17s/61.79u sec elapsed 107.70 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184536 row versions\nDETAIL: CPU 32.23s/51.13u sec elapsed 90.93 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184536 row versions\nDETAIL: CPU 117.68s/70.04u sec elapsed 239.51 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184536 row versions\nDETAIL: CPU 103.82s/72.82u sec elapsed 228.64 sec\nINFO: \"audittraillogentry\": removed 11184536 row versions in 268597 pages\nDETAIL: CPU 4.01s/1.34u sec elapsed 8.74 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184533 row versions\nDETAIL: CPU 26.34s/39.03u sec elapsed 70.76 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184533 row versions\nDETAIL: CPU 35.98s/53.60u sec elapsed 99.27 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184533 row versions\nDETAIL: CPU 32.57s/50.71u sec elapsed 90.61 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184533 row versions\nDETAIL: CPU 122.50s/64.66u sec elapsed 254.06 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184533 row versions\nDETAIL: CPU 100.87s/78.60u sec elapsed 237.31 sec\nINFO: \"audittraillogentry\": removed 11184533 row versions in 268643 pages\nDETAIL: CPU 4.01s/1.23u sec elapsed 7.69 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184535 row versions\nDETAIL: CPU 22.65s/36.84u sec elapsed 61.70 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184535 row versions\nDETAIL: CPU 37.86s/59.20u sec elapsed 104.94 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184535 row versions\nDETAIL: CPU 32.06s/48.99u sec elapsed 88.31 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184535 row versions\nDETAIL: CPU 120.01s/69.92u sec elapsed 245.13 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184535 row versions\nDETAIL: CPU 102.99s/69.48u sec elapsed 
216.71 sec\nINFO: \"audittraillogentry\": removed 11184535 row versions in 268574 pages\nDETAIL: CPU 4.27s/1.41u sec elapsed 9.40 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184545 row versions\nDETAIL: CPU 26.12s/39.21u sec elapsed 71.64 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184545 row versions\nDETAIL: CPU 35.67s/52.12u sec elapsed 95.95 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184545 row versions\nDETAIL: CPU 32.68s/47.59u sec elapsed 86.58 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184545 row versions\nDETAIL: CPU 118.72s/64.51u sec elapsed 249.14 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184545 row versions\nDETAIL: CPU 103.10s/76.75u sec elapsed 248.05 sec\nINFO: \"audittraillogentry\": removed 11184545 row versions in 268662 pages\nDETAIL: CPU 3.69s/1.18u sec elapsed 7.75 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184521 row versions\nDETAIL: CPU 22.80s/35.86u sec elapsed 61.23 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184521 row versions\nDETAIL: CPU 35.79s/53.76u sec elapsed 97.45 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184521 row versions\nDETAIL: CPU 33.41s/46.93u sec elapsed 93.18 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184521 row versions\nDETAIL: CPU 117.29s/66.18u sec elapsed 224.79 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184521 row versions\nDETAIL: CPU 104.67s/68.33u sec elapsed 226.39 sec\nINFO: \"audittraillogentry\": removed 11184521 row versions in 268576 pages\nDETAIL: CPU 3.76s/1.08u sec elapsed 7.49 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184525 row versions\nDETAIL: CPU 25.06s/39.94u sec elapsed 70.43 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184525 row versions\nDETAIL: CPU 35.01s/50.04u sec elapsed 94.04 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184525 row versions\nDETAIL: CPU 
31.41s/45.69u sec elapsed 84.37 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184525 row versions\nDETAIL: CPU 118.28s/63.16u sec elapsed 244.28 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184525 row versions\nDETAIL: CPU 105.60s/73.95u sec elapsed 227.47 sec\nINFO: \"audittraillogentry\": removed 11184525 row versions in 268660 pages\nDETAIL: CPU 3.91s/1.25u sec elapsed 7.51 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184538 row versions\nDETAIL: CPU 23.79s/34.59u sec elapsed 62.01 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184538 row versions\nDETAIL: CPU 36.86s/51.24u sec elapsed 99.10 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184538 row versions\nDETAIL: CPU 34.95s/53.11u sec elapsed 98.44 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184538 row versions\nDETAIL: CPU 115.09s/62.14u sec elapsed 229.85 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184538 row versions\nDETAIL: CPU 107.02s/65.97u sec elapsed 218.05 sec\nINFO: \"audittraillogentry\": removed 11184538 row versions in 268584 pages\nDETAIL: CPU 3.46s/1.30u sec elapsed 7.03 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184546 row versions\nDETAIL: CPU 23.68s/33.59u sec elapsed 60.67 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184546 row versions\nDETAIL: CPU 39.63s/54.93u sec elapsed 106.66 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184546 row versions\nDETAIL: CPU 32.55s/44.43u sec elapsed 84.53 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184546 row versions\nDETAIL: CPU 122.49s/63.49u sec elapsed 235.39 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184546 row versions\nDETAIL: CPU 108.09s/69.68u sec elapsed 227.05 sec\nINFO: \"audittraillogentry\": removed 11184546 row versions in 269472 pages\nDETAIL: CPU 4.32s/1.33u sec elapsed 8.72 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 
11184536 row versions\nDETAIL: CPU 23.70s/32.98u sec elapsed 62.22 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184536 row versions\nDETAIL: CPU 35.77s/46.57u sec elapsed 88.27 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184536 row versions\nDETAIL: CPU 32.59s/43.16u sec elapsed 82.06 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184536 row versions\nDETAIL: CPU 126.27s/60.18u sec elapsed 258.72 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184536 row versions\nDETAIL: CPU 112.57s/65.24u sec elapsed 232.06 sec\nINFO: \"audittraillogentry\": removed 11184536 row versions in 269319 pages\nDETAIL: CPU 3.73s/1.29u sec elapsed 7.58 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184538 row versions\nDETAIL: CPU 23.22s/32.16u sec elapsed 60.22 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184538 row versions\nDETAIL: CPU 38.42s/51.43u sec elapsed 101.53 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184538 row versions\nDETAIL: CPU 33.29s/42.79u sec elapsed 88.70 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184538 row versions\nDETAIL: CPU 124.04s/62.06u sec elapsed 230.83 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184538 row versions\nDETAIL: CPU 105.41s/64.14u sec elapsed 223.93 sec\nINFO: \"audittraillogentry\": removed 11184538 row versions in 269384 pages\nDETAIL: CPU 3.69s/1.11u sec elapsed 7.79 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184520 row versions\nDETAIL: CPU 26.60s/34.89u sec elapsed 64.47 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184520 row versions\nDETAIL: CPU 36.01s/45.24u sec elapsed 88.69 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184520 row versions\nDETAIL: CPU 33.00s/41.31u sec elapsed 83.02 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184520 row versions\nDETAIL: CPU 124.80s/58.92u sec elapsed 246.98 sec\nINFO: scanned index 
\"audit_destid_index\" to remove 11184520 row versions\nDETAIL: CPU 106.35s/71.38u sec elapsed 249.67 sec\nINFO: \"audittraillogentry\": removed 11184520 row versions in 269050 pages\nDETAIL: CPU 3.74s/1.16u sec elapsed 8.87 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184523 row versions\nDETAIL: CPU 21.95s/30.36u sec elapsed 59.88 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184523 row versions\nDETAIL: CPU 33.84s/42.86u sec elapsed 88.67 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184523 row versions\nDETAIL: CPU 35.71s/44.46u sec elapsed 95.35 sec\nINFO: scanned index \"audit_sourceid_index\" to remove 11184523 row versions\nDETAIL: CPU 120.51s/61.81u sec elapsed 249.04 sec\nINFO: scanned index \"audit_destid_index\" to remove 11184523 row versions\nDETAIL: CPU 103.16s/62.69u sec elapsed 231.34 sec\nINFO: \"audittraillogentry\": removed 11184523 row versions in 266741 pages\nDETAIL: CPU 4.27s/1.24u sec elapsed 8.26 sec\nINFO: scanned index \"audittraillogentry_pkey\" to remove 11184551 row versions\nDETAIL: CPU 25.89s/37.48u sec elapsed 69.65 sec\nINFO: scanned index \"audit_intime_index\" to remove 11184551 row versions\nDETAIL: CPU 35.74s/43.70u sec elapsed 100.58 sec\nINFO: scanned index \"audit_outtime_index\" to remove 11184551 row versions\nDETAIL: CPU 31.45s/40.14u sec elapsed 84.00 sec\n \ndb_Server14=# SELECT pid, datname, usename, state, backend_xmin FROM pg_stat_activity WHERE backend_xmin IS NOT NULL ORDER BY age(backend_xmin) DESC;\n pid | datname | usename | state | backend_xmin\n-------+----------------+----------+--------+--------------\n73583 | fm_db_Server14 | mmsuper | active | 63548809\n31359 | fm_db_Server14 | postgres | active | 63548812\n52761 | fm_db_Server14 | mmsuper | active | 63548814\n53197 | fm_db_Server14 | mmsuper | active | 63548815\n53409 | fm_db_Server14 | mmsuper | active | 63548815\n38917 | fm_db_Server14 | mmsuper | active | 63548818\n(6 rows)\n \ndb_Server14=# SELECT 
slot_name, slot_type, database, xmin FROM pg_replication_slots ORDER BY age(xmin) DESC;\nslot_name | slot_type | database | xmin\n-----------+-----------+----------+------\n(0 rows)\n \ndb_Server14=# SELECT gid, prepared, owner, database, transaction AS xmin FROM pg_prepared_xacts ORDER BY age(transaction) DESC;\ngid | prepared | owner | database | xmin\n-----+----------+-------+----------+------\n(0 rows)\n \nRegards\nTarkeshwar",
"msg_date": "Fri, 19 Feb 2021 10:51:32 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Autovacuum not functioning for large tables but it is working for\n few other small tables."
},
{
"msg_contents": "On Fri, 2021-02-19 at 10:51 +0000, M Tarkeshwar Rao wrote:\n> Please find the Vacuum(verbose) output. Can you please suggest what is the reason?\n> How can we avoid these scenarios?\n> \n> The customer tried to run the VACUUM(verbose) last night, but it was running\n> continuously for 5 hours without any visible progress. So they had to abort it\n> as it was going to exhaust their maintenance window.\n> \n> db_Server14=# VACUUM (VERBOSE) audittraillogentry;\n> INFO: vacuuming \"mmsuper.audittraillogentry\"\n> INFO: scanned index \"audittraillogentry_pkey\" to remove 11184539 row versions\n> DETAIL: CPU 25.24s/49.11u sec elapsed 81.33 sec\n> INFO: scanned index \"audit_intime_index\" to remove 11184539 row versions\n> DETAIL: CPU 23.27s/59.28u sec elapsed 88.63 sec\n> INFO: scanned index \"audit_outtime_index\" to remove 11184539 row versions\n> DETAIL: CPU 27.02s/55.10u sec elapsed 92.04 sec\n> INFO: scanned index \"audit_sourceid_index\" to remove 11184539 row versions\n> DETAIL: CPU 110.81s/72.29u sec elapsed 260.71 sec\n> [and so on, the same 6 indexes are repeatedly scanned]\n\nPostgreSQL performs VACUUM in batches of \"maintenance_work_mem\" size\nof tuple identifiers. If that parameter is small, the indexes have\nto be scanned often.\n\nTry increasing \"maintenance_work_mem\" to 1GB (if you have enough RAM),\nthat will make it faster.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 19 Feb 2021 14:29:03 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum not functioning for large tables but it is working\n for few other small tables."
}
] |
[
{
"msg_contents": "Hi\n\nPlease help me with document for oracle to postgresql\nuseing Ora2pgtool\n\n-- \nThanks and Regards\nRajshekhariah Umesh",
"msg_date": "Fri, 18 Dec 2020 16:13:05 +0530",
"msg_from": "bangalore umesh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oracle to postgresql"
},
{
"msg_contents": "2020年12月18日(金) 19:43 bangalore umesh <[email protected]>:\n>\n> Hi\n>\n> Please help me with document for oracle to postgresql\n> useing Ora2pgtool\n\nAFAIK the main documentation is the extensive README file, as available here:\n\n https://github.com/darold/ora2pg\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 18 Dec 2020 20:14:49 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle to postgresql"
}
] |
[
{
"msg_contents": "Hi. I'm wondering if this is normal or at least known behavior?\nBasically, if I'm specifying a LIMIT and also NULLS FIRST (or NULLS LAST\nwith a descending sort), I get a sequence scan and a couple of orders of\nmagnitude slower query. Perhaps not relevantly, but definitely ironically,\nthe sort field in question is defined to be NOT NULL.\n\nThis is on 9.6.20. I tried a couple of different tables in a couple of\ndatabases, with similar results.\n\nThanks in advance for any insight!\n\nKen\n\n\n=> EXPLAIN ANALYZE SELECT * FROM tbl_entry WHERE NOT is_deleted ORDER BY\nentered_at NULLS LAST LIMIT 60;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.29..2.78 rows=60 width=143) (actual time=0.027..0.260\nrows=60 loops=1)\n -> Index Scan using index_tbl_entry_entered_at on tbl_entry\n (cost=0.29..4075.89 rows=98443 width=143) (actual time=0.023..0.105\nrows=60 loops=1)\n Planning time: 0.201 ms\n\n* Execution time: 0.366 ms*(4 rows)\n\n=> EXPLAIN ANALYZE SELECT * FROM tbl_entry WHERE NOT is_deleted ORDER BY\nentered_at NULLS FIRST LIMIT 60;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5927.55..5927.70 rows=60 width=143) (actual\ntime=269.088..269.302 rows=60 loops=1)\n -> Sort (cost=5927.55..6173.65 rows=98443 width=143) (actual\ntime=269.085..269.157 rows=60 loops=1)\n Sort Key: entered_at NULLS FIRST\n Sort Method: top-N heapsort Memory: 33kB\n -> Seq Scan on tbl_entry (cost=0.00..2527.87 rows=98443\nwidth=143) (actual time=0.018..137.028 rows=98107 loops=1)\n Filter: (NOT is_deleted)\n Rows Removed by Filter: 1074\n Planning time: 0.209 ms\n *Execution time: 269.423 ms*\n(9 rows)\n\n=> \\d tbl_entry\n Table \"public.tbl_entry\"\n Column | Type |\n 
Modifiers\n---------------------+--------------------------------+--------------------------------------------------------------\n entry_id | bigint | not null default\nnextval('tbl_entry_entry_id_seq'::regclass)\n entered_at | timestamp without time zone | not null\n exited_at | timestamp without time zone |\n client_id | integer | not null\n issue_no | integer |\n source | character(1) |\n entry_location_code | character varying(10) | not null\n added_by | integer | not null default\nsys_user()\n added_at | timestamp(0) without time zone | not null default\nnow()\n changed_by | integer | not null default\nsys_user()\n changed_at | timestamp(0) without time zone | not null default\nnow()\n is_deleted | boolean | not null default\nfalse\n deleted_at | timestamp(0) without time zone |\n deleted_by | integer |\n deleted_comment | text |\n sys_log | text |\nIndexes:\n \"tbl_entry_pkey\" PRIMARY KEY, btree (entry_id)\n \"index_tbl_entry_client_id\" btree (client_id) WHERE NOT is_deleted\n \"index_tbl_entry_client_id_entered_at\" btree (client_id, entered_at)\nWHERE NOT is_deleted\n \"index_tbl_entry_entered_at\" btree (entered_at) WHERE NOT is_deleted\n \"index_tbl_entry_entry_location_code\" btree (entry_location_code) WHERE\nNOT is_deleted\n \"index_tbl_entry_is_deleted\" btree (is_deleted)\nCheck constraints:\n \"tbl_entry_check\" CHECK (NOT is_deleted AND deleted_at IS NULL OR\nis_deleted AND deleted_at IS NOT NULL)\n \"tbl_entry_check1\" CHECK (NOT is_deleted AND deleted_by IS NULL OR\nis_deleted AND deleted_by IS NOT NULL)\nForeign-key constraints:\n \"tbl_entry_added_by_fkey\" FOREIGN KEY (added_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_entry_changed_by_fkey\" FOREIGN KEY (changed_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_entry_client_id_fkey\" FOREIGN KEY (client_id) REFERENCES\ntbl_client(client_id)\n \"tbl_entry_deleted_by_fkey\" FOREIGN KEY (deleted_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_entry_entry_location_code_fkey\" FOREIGN KEY 
(entry_location_code)\nREFERENCES tbl_l_entry_location(entry_location_code)\nTriggers:\n    tbl_entry_alert_notify AFTER INSERT OR DELETE OR UPDATE ON tbl_entry\nFOR EACH ROW EXECUTE PROCEDURE table_alert_notify()\n    tbl_entry_log_chg AFTER DELETE OR UPDATE ON tbl_entry FOR EACH ROW\nEXECUTE PROCEDURE table_log()\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.",
"msg_date": "Fri, 18 Dec 2020 17:53:03 -0800",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reversing NULLS in ORDER causes index not to be used?"
},
{
"msg_contents": "Ken Tanzer <[email protected]> writes:\n> Hi. I'm wondering if this is normal or at least known behavior?\n> Basically, if I'm specifying a LIMIT and also NULLS FIRST (or NULLS LAST\n> with a descending sort), I get a sequence scan and a couple of orders of\n> magnitude slower query. Perhaps not relevantly, but definitely ironically,\n> the sort field in question is defined to be NOT NULL.\n\nThe index won't get credit for matching the requested ordering if it's\ngot the wrong null-ordering polarity. There's not an exception for\nNOT NULL columns. If you know the column hasn't got nulls, why are\nyou bothering with a nondefault null-ordering request?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Dec 2020 21:02:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reversing NULLS in ORDER causes index not to be used?"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:03 PM Tom Lane <[email protected]> wrote:\n\n> Ken Tanzer <[email protected]> writes:\n> > Hi. I'm wondering if this is normal or at least known behavior?\n> > Basically, if I'm specifying a LIMIT and also NULLS FIRST (or NULLS LAST\n> > with a descending sort), I get a sequence scan and a couple of orders of\n> > magnitude slower query. Perhaps not relevantly, but definitely\n> ironically,\n> > the sort field in question is defined to be NOT NULL.\n>\n> The index won't get credit for matching the requested ordering if it's\n> got the wrong null-ordering polarity. There's not an exception for\n> NOT NULL columns. If you know the column hasn't got nulls, why are\n> you bothering with a nondefault null-ordering request?\n>\n>\nI didn't write the query. I was just trying to troubleshoot one (an d not\nthe one I sent--that was a simplified example). In this case it didn't\nmatter. It just hadn't ever occurred to me that NULLS FIRST/LAST could\nhave performance impacts, and I couldn't see why.\n\nI also see now that CREATE INDEX has NULLS FIRST/LAST options, which now\nmakes perfect sense but was news to me.\n\nStill though is there no optimization gain to be had for being able to\nhandle nulls either first or last in an index? I blissfully know nothing\nabout how such things _actually_ work, but since they're all together at\neither the beginning or the end, it seems like there'd be at most one skip\nin the order of the values to account for, which seems like in many cases\nwould be better than not using an index at all. But there's probably good\nreasons why that doesn't hold water. 
:)\n\nThanks!\n\nKen\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.",
"msg_date": "Fri, 18 Dec 2020 18:30:16 -0800",
"msg_from": "Ken Tanzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reversing NULLS in ORDER causes index not to be used?"
}
] |
[
{
"msg_contents": "Hi all, I have a general question on scaling PostgreSQL for unlimited \nthroughput, based on some experience.\n\nTL;DR: My question is: given that the work-load on any secondary/standby \ndatabase server is almost the same as that of the master database \nserver, is there any point to bother with PgPool-II to route query \nactivity over the hot standby's, is it instead not better to just \nincrease the power of the master database system? Is there any trick \nthat can really get to massive scalability with the database?\n\nBackground: Let's say we're building a massive NSA citizens surveillance \nsystem where we process every call and every email of every person on \nearth to find dissidents and link this to their financial transactions \ntravel logs like airline bookings and Uber rides, to find some bogus \ncharges that FBI agents could use to put any dissident in prison as soon \nas possible. And so we need a system that infinitely scales.\n\nOK, I'm kidding (but keep thinking of adverse effects of our IT work) \nbut the point is a massive data processing system. We know that \nparallelizing all work flows is the key to keep up. Processing the \nspeech and emails and messages and other transactions is mostly doable \nby throwing hardware at it to parallelize the workflows.\n\nThere's always going to be one common bottleneck: that database.\n\nIn my experience of parallelizing workflow processes, I can hammer my \nPostgreSQL database and all I can do to keep up with that is broadening \nthe IO pipeline and looking at the balance of CPU and IO to make sure \nthat it's balanced at near 100% and as I add more CPU bandwidth I add \nmore IO bandwidth and so on to keep those gigabytes flowing and the CPUs \nchurning. 
But it's much harder with the database than with the message \ntransformations (natural language understanding, data extraction, image \nprocessing, etc.)\n\nI have set up a hot standby database which I thought would just keep \ntrack with the master, and which I could use to run queries while the \ninsert, update, and delete operations would all go against the master \ndb. What I discovered is that the stress on the hot standby systems is \nsignificant just to keep up! The replaying of these logs takes \nsignificant resources, so much that if I use a less powerful hardware \nfor the secondary, it tends to fall behind and ultimately bails out \nbecause it cannot process the log stream.\n\nSo, if my secondary is so busy already with just keeping up to date with \nthe master db, and I cannot use a significantly smaller hardware, how \ncan I put a lot of extra query load on these secondary systems? My \nargument is GENERAL not \"show me your schema\", etc. I am talking about \nprinciples. I read it somewhere that you need to dimension these \nsecondaries / standby servers about the same capacity as the master \nserver. And that means that the standby servers are about as busy as the \nmaster server. And that means that as you scale this up, the scaling is \nactually quite inefficient. I have to copy all that data while the \nreceiving end of all that data is as busy receiving this data as the \nmaster server is with processing the actual transactions.\n\nDoesn't that mean that it's better to just scale up the master system as \nmuch as possible while the standby servers are only a means of fault \ntolerance but never actually improved performance? 
In other words there \nis no real benefit of running read/query-only workloads on the \nsecondaries and routing updates to the primary, because the background \nworkload is replicated with every standby server and is not \nsignificantly less than the workload on the master server.\n\nAnd in other words, isn't there a way to replicate that is more \nefficient? Or are there hard limits? Again, I'm talking principles.\n\nFor example, if I just make exact disk copies of the data tables on the \nSCSI bus level (like RAID-1) for block write transactions while I \ndistribute the block read transactions over the RAID-1 spindles, again, \nmost of my disks are still occupied with the write transactions because \nthey all must write everything while I can distribute only the read \nactivity. I suppose I can use some tricks to avoid seek time by \nscheduling reads to those disks that are currently writing to the same \ncylinder (I know that's moot with SSDs but there is some locality issues \neven for DDR RAM access, so the principle still holds). I suppose I \ncould tweak the mirrors, to track the master write with a slight delay \nso as to allow some potential to re-organize blocks so as to write \ncontiguous blocks or blocks that go to the same track. But this type of \nwrite scheduling is what OSs do out of a cache.\n\nSo, my question is: isn't there any trick that can really get to massive \nscalability with the database? Should I even bother with PgPool-II to \nroute query activity over hot standbys? I can buy two boxes of n CPUs \nand disk volume to run as master and slave, or I can spend the same \nmoney to buy a single system with twice the CPU cores and a twice as \nwide IO path and disks. Why would I do anything other but to just \nincrease that master db server?\n\nregards,\n-Gunther\n\n\n\n\n",
"msg_date": "Wed, 23 Dec 2020 13:01:07 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Conundrum with scaling out of bottleneck with hot standby, PgPool-II,\n etc."
},
{
"msg_contents": "Hello,\n\nI asked the same question to myself in the past years.\n\nI think that the question boils down to:\nHow can I achieve unlimited database scalability?\nIs it possible to have linear scalability (i.e. throughput increases\nproportionally to the number of nodes)?\n\nThe answer is \"sharding\". It can be a custom solution or a database that\nsupports it automatically. In this way you can actually split data across\nmultiple nodes and the client contacts only the relevant servers for that\ndata (based on a shard key). See also\nhttps://kubernetes-rails.com/#conclusion about database. Take a look at how\nCassandra, MongoDB, CouchDB and Redis Cluster work for example:\nhowever there are huge limitations / drawbacks that come along with their\nunlimited-scalability strategies.\n\nFor hot standbys, those are only useful if you have a relatively small\nnumber of writes compared to reads. With that slave nodes you only scale\nthe *read* throughput.\n\nHope it helps,\nMarco Colli",
"msg_date": "Wed, 23 Dec 2020 19:34:01 +0100",
"msg_from": "Marco Colli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Conundrum with scaling out of bottleneck with hot standby,\n PgPool-II, etc."
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 07:34:01PM +0100, Marco Colli wrote:\n> Hello,\n> \n> I asked the same question to myself in the past years.\n> \n> I think that the question boils down to:\n> How can I achieve unlimited database scalability?\n> Is it possible to have linear scalability (i.e. throughput increases\n> proportionally to the number of nodes)?\n> \n> The answer is \"sharding\". It can be a custom solution or a database that\n> supports it automatically. In this way you can actually split data across\n> multiple nodes and the client contacts only the relevant servers for that data\n> (based on a shard key). See also https://kubernetes-rails.com/#conclusion about\n> database. Take a look at how Cassandra, MongoDB, CouchDB and Redis Cluster work\n> for example: however there are huge limitations / drawbacks that come along\n> with their unlimited-scalability strategies.\n> \n> For hot standbys, those are only useful if you have a relatively small number\n> of writes compared to reads. With that slave nodes you only scale the *read*\n> throughput.\n\nAgreed. There are really two parameters:\n\n1. percentage of reads vs writes\n2. number of standbys\n\nIf you have a high value for #1, it makes sense to use pgpool, but\nhaving only one standby doesn't buy you much; add three, and you will\nsee an impact. Second, if writes are high, only scaling up the primary\nor adding sharding will help you. It is kind of an odd calculus, but it\nmakes sense.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 23 Dec 2020 14:09:19 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Conundrum with scaling out of bottleneck with hot standby,\n PgPool-II, etc."
}
] |
[
{
"msg_contents": "Hi,\n\nI have a performance problem with a query. I've uploaded it, along with\nthe EXPLAIN ANALYZE output here [1].\n\n1: https://explain.depesz.com/s/w5vP\n\nI think the query is taking longer than I'd like, as PostgreSQL is not\ngenerating a great plan, in particular the estimated rows in parts of\nthe plan are way off from the actual number of rows produced, and I\nthink PostgreSQL might pick a better approach if it made more reasonable\nestimates.\n\nLooking at the row numbers on [1], I think things start to go wrong at\nrow 11. The first part of the recursive CTE is estimated to generate\n22,122 rows, and this isn't far off the 15,670 rows it generates. The\nWorkTable scan expects to generate 10x that though, at 221,220. Maybe\nthat's reasonable, I'm unsure?\n\nThings seem to continue getting out of hand from this point. The nested\nloop on line 10 generates 232x less rows than expected, and this results\nin an overall overestimate of 4,317x.\n\nMy theory as to why the row estimates is way off is that the planner is\nusing the n_distinct counts for the derivation_inputs table, and they\nare way off [2]. 
I've set the stats target to the highest value for this\ntable, so I'm unsure what I can do to improve these estimates?\n\nI've included relevant numbers for the two important tables below\n[2][3], as well as descriptions of the tables [4][5].\n\nDoes anyone know if my theory as to why the row estimates in the plan is\nright, or have any ideas as how I can make the query more performant?\n\nThanks,\n\nChris\n\n\n2: \nderivation_inputs:\n COUNT(*): 285422539\n reltuples: 285422528\n\n derivation_id:\n COUNT(DISTINCT): 7508610\n n_distinct: 4336644 (~57% of the true value)\n\n derivation_output_id:\n COUNT(DISTINCT): 5539406\n n_distinct: 473762 (~8% of the true value)\n\nderivation_outputs:\n COUNT(*): 8206019\n reltuples: 8205873\n\n3:\nguix_data_service=> SELECT attname, n_distinct::integer FROM pg_stats WHERE tablename = 'derivation_outputs';\n attname | n_distinct \n------------------------------+------------\n derivation_id | -1\n name | 54\n derivation_output_details_id | 372225\n id | -1\n(4 rows)\n\n\n4:\nguix_data_service@[local]:5432/patches_guix_data_service> \\d+ derivation_inputs\n Table \"guix_data_service.derivation_inputs\"\n┌──────────────────────┬─────────┬───────────┬──────────┬─────────┬─────────┬──────────────┬─────────────┐\n│ Column │ Type │ Collation │ Nullable │ Default │ Storage │ Stats target │ Description │\n├──────────────────────┼─────────┼───────────┼──────────┼─────────┼─────────┼──────────────┼─────────────┤\n│ derivation_id │ integer │ │ not null │ │ plain │ 10000 │ │\n│ derivation_output_id │ integer │ │ not null │ │ plain │ 10000 │ │\n└──────────────────────┴─────────┴───────────┴──────────┴─────────┴─────────┴──────────────┴─────────────┘\nIndexes:\n \"derivation_inputs_pkey\" PRIMARY KEY, btree (derivation_id, derivation_output_id) WITH (fillfactor='100')\n \"derivation_inputs_derivation_output_id_idx\" btree (derivation_output_id) WITH (fillfactor='100')\nForeign-key constraints:\n \"derivation_id_fk\" FOREIGN KEY 
(derivation_id) REFERENCES derivations(id)\n \"derivation_output_id_fk\" FOREIGN KEY (derivation_output_id) REFERENCES derivation_outputs(id)\nAccess method: heap\nOptions: autovacuum_vacuum_scale_factor=0.01, autovacuum_analyze_scale_factor=0.01\n\n\n5:\nguix_data_service@[local]:5432/patches_guix_data_service> \\d+ derivation_outputs\n Table \"guix_data_service.derivation_outputs\"\n┌──────────────────────────────┬───────────────────┬───────────┬──────────┬──────────────────────────────┬──────────┬──────────────┬─────────────┐\n│ Column │ Type │ Collation │ Nullable │ Default │ Storage │ Stats target │ Description │\n├──────────────────────────────┼───────────────────┼───────────┼──────────┼──────────────────────────────┼──────────┼──────────────┼─────────────┤\n│ derivation_id │ integer │ │ not null │ │ plain │ │ │\n│ name │ character varying │ │ not null │ │ extended │ │ │\n│ derivation_output_details_id │ integer │ │ not null │ │ plain │ │ │\n│ id │ integer │ │ not null │ generated always as identity │ plain │ │ │\n└──────────────────────────────┴───────────────────┴───────────┴──────────┴──────────────────────────────┴──────────┴──────────────┴─────────────┘\nIndexes:\n \"derivation_outputs_pkey\" PRIMARY KEY, btree (derivation_id, name)\n \"derivation_outputs_derivation_id_idx\" btree (derivation_id)\n \"derivation_outputs_derivation_output_details_id_idx\" btree (derivation_output_details_id)\n \"derivation_outputs_unique_id\" UNIQUE CONSTRAINT, btree (id)\nForeign-key constraints:\n \"derivation_outputs_derivation_id_fk\" FOREIGN KEY (derivation_id) REFERENCES derivations(id)\n \"derivation_outputs_derivation_output_details_id_fk\" FOREIGN KEY (derivation_output_details_id) REFERENCES derivation_output_details(id)\nReferenced by:\n TABLE \"derivation_inputs\" CONSTRAINT \"derivation_output_id_fk\" FOREIGN KEY (derivation_output_id) REFERENCES derivation_outputs(id)\nAccess method: heap\nOptions: autovacuum_vacuum_scale_factor=0.01, 
autovacuum_analyze_scale_factor=0.01",
"msg_date": "Mon, 28 Dec 2020 14:50:55 +0000",
"msg_from": "Christopher Baines <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow recursive CTE query questions, with row estimate and\n n_distinct issues"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 7:51 AM Christopher Baines <[email protected]> wrote:\n\n> derivation_inputs:\n> COUNT(*): 285422539\n> reltuples: 285422528\n>\n> derivation_id:\n> COUNT(DISTINCT): 7508610\n> n_distinct: 4336644 (~57% of the true value)\n>\n> derivation_output_id:\n> COUNT(DISTINCT): 5539406\n> n_distinct: 473762 (~8% of the true value)\n>\n\nIf you expect the ratio of distinct of derivation_output_id values to be\nroughly linear going forward, you can set a custom value for n_distinct on\nthe column (currently seems like -.0194, aka distinct count\nof derivation_output_id divided by reltuples of the table). You could also\ndo this analysis every month or six and set the custom value as needed.\n\nhttps://www.postgresql.org/docs/current/sql-altertable.html\n\nI am not sure if it will resolve your query problems though.",
"msg_date": "Mon, 28 Dec 2020 08:12:40 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow recursive CTE query questions, with row estimate and\n n_distinct issues"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 7:51 AM Christopher Baines <[email protected]> wrote:\n\n> Hi,\n>\n> I have a performance problem with a query. I've uploaded it, along with\n> the EXPLAIN ANALYZE output here [1].\n>\n> 1: https://explain.depesz.com/s/w5vP\n>\n> I think the query is taking longer than I'd like, as PostgreSQL is not\n> generating a great plan, in particular the estimated rows in parts of\n> the plan are way off from the actual number of rows produced, and I\n> think PostgreSQL might pick a better approach if it made more reasonable\n> estimates.\n>\n> Looking at the row numbers on [1], I think things start to go wrong at\n> row 11. The first part of the recursive CTE is estimated to generate\n> 22,122 rows, and this isn't far off the 15,670 rows it generates. The\n> WorkTable scan expects to generate 10x that though, at 221,220. Maybe\n> that's reasonable, I'm unsure?\n>\n> Things seem to continue getting out of hand from this point. The nested\n> loop on line 10 generates 232x less rows than expected, and this results\n> in an overall overestimate of 4,317x.\n>\n> My theory as to why the row estimates is way off is that the planner is\n> using the n_distinct counts for the derivation_inputs table, and they\n> are way off [2]. 
I've set the stats target to the highest value for this\n> table, so I'm unsure what I can do to improve these estimates?\n>\n> I've included relevant numbers for the two important tables below\n> [2][3], as well as descriptions of the tables [4][5].\n>\n> Does anyone know if my theory as to why the row estimates in the plan is\n> right, or have any ideas as how I can make the query more performant?\n>\n> Thanks,\n>\n> Chris\n>\n>\nHi Chris\n\nWhat is your realistic target maximum execution time for the query?\n\nIdeas off the top of my head sans all the usual tuning of shared_buffers et\nal...\n\nWhen was the last time autovacuum vacuumed and analyzed the source tables?\n\nCan you leverage a materialized view refreshing when necessary? Thinking\nfor the 3 joined tables but could be applied to the whole result.\n\nIf you can guarantee no duplicate rows before and after current UNION,\nchange to UNION ALL.\n\nIf you have the luxury of faster disk, such as NVMe, create a tablespace\nand move data and/or indexes there. Adjust storage planner options\naccordingly.\n\nPersonally, I like to see guardrails on RECURSIVE CTE's but you know your\ndata and clients better than I so may not be necessary. Suggestions\ninclude:\n 1. Limit recursion depth. It does not seem to be the intent here though\nbut ask if really the whole is necessary. Not knowing the application but\nsay 10 deep is all that is needed initially. If the need for more is\nrequired, run the query again but has a different guix_revisions.commit\nstarting point. Think of it as pagination.\n 2. Cycle detection / prevention but depends on the hierarchy represented\nby the data. Doesn't seem to be the case here since the query as-is does\nreturn.\n\nHTH.\n\n-Greg",
"msg_date": "Mon, 28 Dec 2020 09:09:09 -0700",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow recursive CTE query questions, with row estimate and\n n_distinct issues"
},
{
"msg_contents": "Michael Lewis <[email protected]> writes:\n\n> On Mon, Dec 28, 2020 at 7:51 AM Christopher Baines <[email protected]> wrote:\n>\n>> derivation_inputs:\n>> COUNT(*): 285422539\n>> reltuples: 285422528\n>>\n>> derivation_id:\n>> COUNT(DISTINCT): 7508610\n>> n_distinct: 4336644 (~57% of the true value)\n>>\n>> derivation_output_id:\n>> COUNT(DISTINCT): 5539406\n>> n_distinct: 473762 (~8% of the true value)\n>>\n>\n> If you expect the ratio of distinct of derivation_output_id values to be\n> roughly linear going forward, you can set a custom value for n_distinct on\n> the column (currently seems like -.0194, aka distinct count\n> of derivation_output_id divided by reltuples of the table). You could also\n> do this analysis every month or six and set the custom value as needed.\n>\n> https://www.postgresql.org/docs/current/sql-altertable.html\n>\n> I am not sure if it will resolve your query problems though.\n\nThanks Michael, I didn't realise a custom value could be set, but I'll\nlook in to this.\n\nI actually managed to speed the query up enough by increasing\nwork_mem/shared_buffers. I didn't realise one of the sequential scans\nwas executing 14 times, but giving PostgreSQL more resources means it\njust executes once, which really helps.\n\nThanks again,\n\nChris",
"msg_date": "Tue, 29 Dec 2020 22:40:39 +0000",
"msg_from": "Christopher Baines <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow recursive CTE query questions, with row estimate and\n n_distinct issues"
},
{
"msg_contents": "Greg Spiegelberg <[email protected]> writes:\n\n> On Mon, Dec 28, 2020 at 7:51 AM Christopher Baines <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I have a performance problem with a query. I've uploaded it, along with\n>> the EXPLAIN ANALYZE output here [1].\n>>\n>> 1: https://explain.depesz.com/s/w5vP\n>>\n>> I think the query is taking longer than I'd like, as PostgreSQL is not\n>> generating a great plan, in particular the estimated rows in parts of\n>> the plan are way off from the actual number of rows produced, and I\n>> think PostgreSQL might pick a better approach if it made more reasonable\n>> estimates.\n>>\n>> Looking at the row numbers on [1], I think things start to go wrong at\n>> row 11. The first part of the recursive CTE is estimated to generate\n>> 22,122 rows, and this isn't far off the 15,670 rows it generates. The\n>> WorkTable scan expects to generate 10x that though, at 221,220. Maybe\n>> that's reasonable, I'm unsure?\n>>\n>> Things seem to continue getting out of hand from this point. The nested\n>> loop on line 10 generates 232x less rows than expected, and this results\n>> in an overall overestimate of 4,317x.\n>>\n>> My theory as to why the row estimates is way off is that the planner is\n>> using the n_distinct counts for the derivation_inputs table, and they\n>> are way off [2]. I've set the stats target to the highest value for this\n>> table, so I'm unsure what I can do to improve these estimates?\n>>\n>> I've included relevant numbers for the two important tables below\n>> [2][3], as well as descriptions of the tables [4][5].\n>>\n>> Does anyone know if my theory as to why the row estimates in the plan is\n>> right, or have any ideas as how I can make the query more performant?\n\n> Hi Chris\n>\n> What is your realistic target maximum execution time for the query?\n\nIt's for a web page. 
It can be slow, so 10 seconds is a reasonable\ntarget.\n\n> Ideas off the top of my head sans all the usual tuning of shared_buffers et\n> al...\n>\n> When was the last time autovacuum vacuumed and analyzed the source tables?\n>\n> Can you leverage a materialized view refreshing when necessary? Thinking\n> for the 3 joined tables but could be applied to the whole result.\n>\n> If you can guarantee no duplicate rows before and after current UNION,\n> change to UNION ALL.\n>\n> If you have the luxury of faster disk, such as NVMe, create a tablespace\n> and move data and/or indexes there. Adjust storage planner options\n> accordingly.\n>\n> Personally, I like to see guardrails on RECURSIVE CTE's but you know your\n> data and clients better than I so may not be necessary. Suggestions\n> include:\n> 1. Limit recursion depth. It does not seem to be the intent here though\n> but ask if really the whole is necessary. Not knowing the application but\n> say 10 deep is all that is needed initially. If the need for more is\n> required, run the query again but has a different guix_revisions.commit\n> starting point. Think of it as pagination.\n> 2. Cycle detection / prevention but depends on the hierarchy represented\n> by the data. Doesn't seem to be the case here since the query as-is does\n> return.\n\nThanks for your suggestions Greg. Turns out that I actually managed to\nspeed the query up enough by increasing work_mem/shared_buffers. I\ndidn't realise one of the sequential scans was executing 14 times, but\ngiving PostgreSQL more resources means it just executes once, which\nreally helps.\n\nThere's probably still room for improvement, but it's \"fast enough\" now.\n\nThanks again,\n\nChris",
"msg_date": "Tue, 29 Dec 2020 22:45:55 +0000",
"msg_from": "Christopher Baines <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow recursive CTE query questions, with row estimate and\n n_distinct issues"
}
] |
[
{
"msg_contents": "Good morning,\n\nThis week we've noticed that we're starting to see spikes where COMMITs are\ntaking much longer than usual. Sometimes, quite a few seconds to finish.\nAfter a few minutes they disappear but then return seemingly at random.\nThis becomes visible to the app and end user as a big stall in activity.\n\nThe checkpoints are still running for their full 5 min checkpoint_timeout\nduration (logs all say \"checkpoint starting: time\" and I'm not seeing any\nwarnings about them occurring too frequently).\n\nThis is PostgreSQL 12.4 on Ubuntu 18.04, all running in MS Azure (*not*\nmanaged by them).\n\n# select version();\n version\n---------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.4 (Ubuntu 12.4-1.pgdg18.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\nI have the stats_temp_directory in a tmpfs mount. I *do* have pg_wal on the\nsame premium SSD storage volume as the data directory. Normally I would\nknow to separate these but I was told with the cloud storage that it's all\nvirtualized anyway, plus storage IOPS are determined by disk size so having\na smaller volume just for pg_wal would hurt me in this case. The kind folks\nin the PG community Slack suggested just having one large premium cloud\nstorage mount for the data directory and leave pg_wal inside because this\nvirtualization removes any guarantee of true separation.\n\nI'm wondering if others have experience running self-managed PG in a cloud\nsetting (especially if in MS Azure) and what they might have seen/done in\ncases like this.\n\nThanks,\nDon.\n\n-- \nDon Seiler\nwww.seiler.us",
"msg_date": "Wed, 6 Jan 2021 10:19:19 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "High COMMIT times"
},
{
"msg_contents": "> I have the stats_temp_directory in a tmpfs mount. I *do* have pg_wal on\n> the same premium SSD storage volume as the data directory. Normally I would\n> know to separate these but I was told with the cloud storage that it's all\n> virtualized anyway, plus storage IOPS are determined by disk size so having\n> a smaller volume just for pg_wal would hurt me in this case. The kind folks\n> in the PG community Slack suggested just having one large premium cloud\n> storage mount for\n>\nthe data directory and leave pg_wal inside because this virtualization\n> removes any guarantee of true separation.\n>\n\nIt is true that the IO is virtualized but that does not mean that separate\nvolumes won't help. In cloud storage you are granted specific IOPS/MB/s per\nvolume. Separating pg_wal to a new volume mount will take pressure off of\npage writes and allow the wal to write within its own prioritization.\n\nJD",
"msg_date": "Wed, 6 Jan 2021 08:51:06 -0800",
"msg_from": "Joshua Drake <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 10:51 AM Joshua Drake <[email protected]> wrote:\n\n> I have the stats_temp_directory in a tmpfs mount. I *do* have pg_wal on\n>> the same premium SSD storage volume as the data directory. Normally I would\n>> know to separate these but I was told with the cloud storage that it's all\n>> virtualized anyway, plus storage IOPS are determined by disk size so having\n>> a smaller volume just for pg_wal would hurt me in this case. The kind folks\n>> in the PG community Slack suggested just having one large premium cloud\n>> storage mount for\n>>\n> the data directory and leave pg_wal inside because this virtualization\n>> removes any guarantee of true separation.\n>>\n>\n> It is true that the IO is virtualized but that does not mean that separate\n> volumes won't help. In cloud storage you are granted specific IOPS/MB/s per\n> volume. Separating pg_wal to a new volume mount will take pressure off of\n> page writes and allow the wal to write within its own prioritization.\n>\n\nLooking at the Azure portal metric, we are nowhere close to the advertised\nmaximum IOPS or MB/s throughput (under half of the maximum IOPS and under a\nquarter of the MB/s maximum). So there must be some other bottleneck in\nplay. The IOPS limit on this VM size is even higher so that shouldn't be it.\n\nIf I were to size a separate volume for just WAL, I would think 64GB but\nobviously the Azure storage IOPS are much less. On this particular DB host,\nwe're currently on a 2.0T P40 disk that is supposed to give 7500 IOPS and\n250MB/s [1] (but again, Azure's own usage graphs show us nowhere near those\nlimits). A smaller volume like 64GB would be provisioned at 240 IOPS in\nthis example. Doesn't seem like a lot. 
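To spell out the arithmetic (a quick sketch; the IOPS figures are the ones quoted from [1] above and may not match current Azure tiers):

```python
# Provisioned IOPS for the two Azure premium disk sizes discussed above.
# Figures are the ones quoted from the pricing page [1]; treat them as
# illustrative rather than as current Azure limits.
shared_p40_iops = 7500   # 2.0 TB P40 currently holding data directory + pg_wal
dedicated_p6_iops = 240  # 64 GB P6 as a hypothetical dedicated pg_wal disk

ratio = shared_p40_iops / dedicated_p6_iops
print(f"Dedicated 64 GB WAL disk: {dedicated_p6_iops} IOPS, "
      f"about 1/{ratio:.0f} of the shared disk's {shared_p40_iops} IOPS.")
```

So a small dedicated WAL volume trades away write contention for a much lower absolute IOPS ceiling.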
Really until you get a terabyte it\nseems like a real drop-off as far as Azure storage goes.\n\nI'd be interested to hear what others might have configured on their\nwrite-heavy cloud databases.\n\n[1] https://azure.microsoft.com/en-us/pricing/details/managed-disks/\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us",
"msg_date": "Wed, 6 Jan 2021 12:06:27 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": ">\n> Looking at the Azure portal metric, we are nowhere close to the advertised\n> maximum IOPS or MB/s throughput (under half of the maximum IOPS and under a\n> quarter of the MB/s maximum). So there must be some other bottleneck in\n> play. The IOPS limit on this VM size is even higher so that shouldn't be it.\n>\n> If I were to size a separate volume for just WAL, I would think 64GB but\n> obviously the Azure storage IOPS are much less. On this particular DB host,\n> we're currently on a 2.0T P40 disk that is supposed to give 7500 IOPS and\n> 250MB/s [1] (but again, Azure's own usage graphs show us nowhere near those\n> limits). A smaller volume like 64GB would be provisioned at 240 IOPS in\n> this example. Doesn't seem like a lot. Really until you get a terabyte it\n> seems like a real drop-off as far as Azure storage goes.\n>\n>\nBased on those metrics, I would start looking at other things. For example,\nI once (it was years ago) experienced commit delays because the kernel\ncache on Linux was getting over run. Do you have any metrics on the system\nas a whole? Perhaps sar running every few minutes will help you identify a\ncorrelation?\n\nJD\n\n\n\n> I'd be interested to hear what others might have configured on their\n> write-heavy cloud databases.\n>\n> [1] https://azure.microsoft.com/en-us/pricing/details/managed-disks/\n>\n> Don.\n>\n> --\n> Don Seiler\n> www.seiler.us\n>\n\nLooking at the Azure portal metric, we are nowhere close to the advertised maximum IOPS or MB/s throughput (under half of the maximum IOPS and under a quarter of the MB/s maximum). So there must be some other bottleneck in play. The IOPS limit on this VM size is even higher so that shouldn't be it.If I were to size a separate volume for just WAL, I would think 64GB but obviously the Azure storage IOPS are much less. 
On this particular DB host, we're currently on a 2.0T P40 disk that is supposed to give 7500 IOPS and 250MB/s [1] (but again, Azure's own usage graphs show us nowhere near those limits). A smaller volume like 64GB would be provisioned at 240 IOPS in this example. Doesn't seem like a lot. Really until you get a terabyte it seems like a real drop-off as far as Azure storage goes.Based on those metrics, I would start looking at other things. For example, I once (it was years ago) experienced commit delays because the kernel cache on Linux was getting over run. Do you have any metrics on the system as a whole? Perhaps sar running every few minutes will help you identify a correlation?JD I'd be interested to hear what others might have configured on their write-heavy cloud databases.[1] https://azure.microsoft.com/en-us/pricing/details/managed-disks/Don.-- Don Seilerwww.seiler.us",
"msg_date": "Wed, 6 Jan 2021 10:53:27 -0800",
"msg_from": "Joshua Drake <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Wed, Jan 06, 2021 at 12:06:27PM -0600, Don Seiler wrote:\n> On Wed, Jan 6, 2021 at 10:51 AM Joshua Drake <[email protected]> wrote:\n> \n> Looking at the Azure portal metric, we are nowhere close to the advertised\n> maximum IOPS or MB/s throughput (under half of the maximum IOPS and under a\n> quarter of the MB/s maximum). So there must be some other bottleneck in\n> play. The IOPS limit on this VM size is even higher so that shouldn't be it.\n> \n\nHi Don,\n\nI may just be re-stating common knowledge, but the available IOPS would\nbe constrained by how tightly coupled the storage is to the CPU. Even a\nsmall increase can limit the maximum IOPS unless you can issue multiple\nrelatively independent queries at one. I know no details of how Azure\nimplements their storage tiers.\n\nRegards,\nKen\n\n\n",
"msg_date": "Wed, 6 Jan 2021 13:15:25 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "Azure VMs do have their own IOPS limits that increase with increasing VM\n\"size\". In this current case our VM size puts that VM IOPS limit well above\nanything the disks are rated at, so it shouldn't be a bottleneck.\n\nOn Wed, Jan 6, 2021 at 1:15 PM Kenneth Marshall <[email protected]> wrote:\n\n> On Wed, Jan 06, 2021 at 12:06:27PM -0600, Don Seiler wrote:\n> > On Wed, Jan 6, 2021 at 10:51 AM Joshua Drake <[email protected]>\n> wrote:\n> >\n> > Looking at the Azure portal metric, we are nowhere close to the\n> advertised\n> > maximum IOPS or MB/s throughput (under half of the maximum IOPS and\n> under a\n> > quarter of the MB/s maximum). So there must be some other bottleneck in\n> > play. The IOPS limit on this VM size is even higher so that shouldn't be\n> it.\n> >\n>\n> Hi Don,\n>\n> I may just be re-stating common knowledge, but the available IOPS would\n> be constrained by how tightly coupled the storage is to the CPU. Even a\n> small increase can limit the maximum IOPS unless you can issue multiple\n> relatively independent queries at one. I know no details of how Azure\n> implements their storage tiers.\n>\n> Regards,\n> Ken\n>\n\n\n-- \nDon Seiler\nwww.seiler.us\n\nAzure VMs do have their own IOPS limits that increase with increasing VM \"size\". In this current case our VM size puts that VM IOPS limit well above anything the disks are rated at, so it shouldn't be a bottleneck.On Wed, Jan 6, 2021 at 1:15 PM Kenneth Marshall <[email protected]> wrote:On Wed, Jan 06, 2021 at 12:06:27PM -0600, Don Seiler wrote:\n> On Wed, Jan 6, 2021 at 10:51 AM Joshua Drake <[email protected]> wrote:\n> \n> Looking at the Azure portal metric, we are nowhere close to the advertised\n> maximum IOPS or MB/s throughput (under half of the maximum IOPS and under a\n> quarter of the MB/s maximum). So there must be some other bottleneck in\n> play. 
The IOPS limit on this VM size is even higher so that shouldn't be it.\n> \n\nHi Don,\n\nI may just be re-stating common knowledge, but the available IOPS would\nbe constrained by how tightly coupled the storage is to the CPU. Even a\nsmall increase can limit the maximum IOPS unless you can issue multiple\nrelatively independent queries at one. I know no details of how Azure\nimplements their storage tiers.\n\nRegards,\nKen\n-- Don Seilerwww.seiler.us",
"msg_date": "Wed, 6 Jan 2021 14:06:07 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Wed, 2021-01-06 at 10:19 -0600, Don Seiler wrote:\n> This week we've noticed that we're starting to see spikes where COMMITs are taking much longer than usual.\n> Sometimes, quite a few seconds to finish.\n>\n> This is PostgreSQL 12.4 on Ubuntu 18.04, all running in MS Azure (*not* managed by them).\n\nUnless you are using WITH HOLD cursors on large result sets, this is very likely\nI/O overload. Use tools like \"sar\", \"vmstat\" and \"iostat\" to monitor your I/O load.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 07 Jan 2021 03:02:53 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "We had a similar situation recently and saw high commit times that were\ncaused by having unindexed foreign key columns when deleting data with\nlarge tables involved. You might check to see if any new foreign key\nconstraints have been added recently or if any foreign key indexes may have\ninadvertently been removed. Indexing the foreign keys resolved our issue.\n\nRegards,\n\nCraig\n\nOn Wed, Jan 6, 2021 at 9:19 AM Don Seiler <[email protected]> wrote:\n\n> Good morning,\n>\n> This week we've noticed that we're starting to see spikes where COMMITs\n> are taking much longer than usual. Sometimes, quite a few seconds to\n> finish. After a few minutes they disappear but then return seemingly at\n> random. This becomes visible to the app and end user as a big stall in\n> activity.\n>\n> The checkpoints are still running for their full 5 min checkpoint_timeout\n> duration (logs all say \"checkpoint starting: time\" and I'm not seeing any\n> warnings about them occurring too frequently.\n>\n> This is PostgreSQL 12.4 on Ubuntu 18.04, all running in MS Azure (*not*\n> managed by them).\n>\n> # select version();\n> version\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 12.4 (Ubuntu 12.4-1.pgdg18.04+1) on x86_64-pc-linux-gnu,\n> compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n>\n> I have the stats_temp_directory in a tmpfs mount. I *do* have pg_wal on\n> the same premium SSD storage volume as the data directory. Normally I would\n> know to separate these but I was told with the cloud storage that it's all\n> virtualized anyway, plus storage IOPS are determined by disk size so having\n> a smaller volume just for pg_wal would hurt me in this case. 
The kind folks\n> in the PG community Slack suggested just having one large premium cloud\n> storage mount for the data directory and leave pg_wal inside because this\n> virtualization removes any guarantee of true separation.\n>\n> I'm wondering if others have experience running self-managed PG in a cloud\n> setting (especially if in MS Azure) and what they might have seen/done in\n> cases like this.\n>\n> Thanks,\n> Don.\n>\n> --\n> Don Seiler\n> www.seiler.us\n>\n\n\n-- \nCraig\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.",
"msg_date": "Thu, 7 Jan 2021 10:49:58 -0700",
"msg_from": "Craig Jackson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 11:50 AM Craig Jackson <[email protected]>\nwrote:\n\n> We had a similar situation recently and saw high commit times that were\n> caused by having unindexed foreign key columns when deleting data with\n> large tables involved. You might check to see if any new foreign key\n> constraints have been added recently or if any foreign key indexes may have\n> inadvertently been removed. Indexing the foreign keys resolved our issue.\n>\n\nInteresting, I'll run a check for any. Thanks!\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nOn Thu, Jan 7, 2021 at 11:50 AM Craig Jackson <[email protected]> wrote:We had a similar situation recently and saw high commit times that were caused by having unindexed foreign key columns when deleting data with large tables involved. You might check to see if any new foreign key constraints have been added recently or if any foreign key indexes may have inadvertently been removed. Indexing the foreign keys resolved our issue. Interesting, I'll run a check for any. Thanks!Don.-- Don Seilerwww.seiler.us",
"msg_date": "Thu, 7 Jan 2021 12:03:08 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Thu, 2021-01-07 at 10:49 -0700, Craig Jackson wrote:\n> We had a similar situation recently and saw high commit times that were caused\n> by having unindexed foreign key columns when deleting data with large tables involved.\n> You might check to see if any new foreign key constraints have been added\n> recently or if any foreign key indexes may have inadvertently been removed.\n> Indexing the foreign keys resolved our issue. \n\nWere these deferred foreign key constraints?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 08 Jan 2021 10:05:53 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 3:03 PM Don Seiler <[email protected]> wrote:\n\n> On Thu, Jan 7, 2021 at 11:50 AM Craig Jackson <[email protected]>\n> wrote:\n>\n>> We had a similar situation recently and saw high commit times that were\n>> caused by having unindexed foreign key columns when deleting data with\n>> large tables involved. You might check to see if any new foreign key\n>> constraints have been added recently or if any foreign key indexes may have\n>> inadvertently been removed. Indexing the foreign keys resolved our issue.\n>>\n>\n> Interesting, I'll run a check for any. Thanks!\n>\n> Don.\n>\n> --\n> Don Seiler\n> www.seiler.us\n>\n\nDo you have standby databases and synchronous_commit = 'remote_apply'?\n-- \nJosé Arthur Benetasso Villanova\n\nOn Thu, Jan 7, 2021 at 3:03 PM Don Seiler <[email protected]> wrote:On Thu, Jan 7, 2021 at 11:50 AM Craig Jackson <[email protected]> wrote:We had a similar situation recently and saw high commit times that were caused by having unindexed foreign key columns when deleting data with large tables involved. You might check to see if any new foreign key constraints have been added recently or if any foreign key indexes may have inadvertently been removed. Indexing the foreign keys resolved our issue. Interesting, I'll run a check for any. Thanks!Don.-- Don Seilerwww.seiler.us\nDo you have standby databases and synchronous_commit = 'remote_apply'?-- José Arthur Benetasso Villanova",
"msg_date": "Fri, 8 Jan 2021 14:51:55 -0300",
"msg_from": "=?UTF-8?Q?Jos=C3=A9_Arthur_Benetasso_Villanova?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 11:52 AM José Arthur Benetasso Villanova <\[email protected]> wrote:\n\n> Do you have standby databases and synchronous_commit = 'remote_apply'?\n>\n\nWe have standby databases, but synchronous_commit is \"on\".\n\nI found a pair of unindexed foreign key child tables but they never run\ndeletes on those and they are very small. I did mention it to the app team\nas a best practice in the meantime but I don't think it would play a part\nin our current issue.\n\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nOn Fri, Jan 8, 2021 at 11:52 AM José Arthur Benetasso Villanova <[email protected]> wrote:Do you have standby databases and synchronous_commit = 'remote_apply'?We have standby databases, but synchronous_commit is \"on\".I found a pair of unindexed foreign key child tables but they never run deletes on those and they are very small. I did mention it to the app team as a best practice in the meantime but I don't think it would play a part in our current issue.Don.-- Don Seilerwww.seiler.us",
"msg_date": "Fri, 8 Jan 2021 13:04:30 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "Yes, these were deferrable foreign key constraints.\n\nOn Fri, Jan 8, 2021 at 2:05 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Thu, 2021-01-07 at 10:49 -0700, Craig Jackson wrote:\n> > We had a similar situation recently and saw high commit times that were\n> caused\n> > by having unindexed foreign key columns when deleting data with large\n> tables involved.\n> > You might check to see if any new foreign key constraints have been added\n> > recently or if any foreign key indexes may have inadvertently been\n> removed.\n> > Indexing the foreign keys resolved our issue.\n>\n> Were these deferred foreign key constraints?\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n-- \nCraig\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.",
"msg_date": "Fri, 8 Jan 2021 14:01:32 -0700",
"msg_from": "Craig Jackson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 11:19 AM Don Seiler <[email protected]> wrote:\n\n> Good morning,\n>\n> This week we've noticed that we're starting to see spikes where COMMITs\n> are taking much longer than usual. Sometimes, quite a few seconds to\n> finish. After a few minutes they disappear but then return seemingly at\n> random. This becomes visible to the app and end user as a big stall in\n> activity.\n>\n\nHow are you monitoring the COMMIT times? What do you generally see in\npg_stat_activity.wait_event during the spikes/stalls?\n\nCheers,\n\nJeff\n\n>\n\nOn Wed, Jan 6, 2021 at 11:19 AM Don Seiler <[email protected]> wrote:Good morning,This week we've noticed that we're starting to see spikes where COMMITs are taking much longer than usual. Sometimes, quite a few seconds to finish. After a few minutes they disappear but then return seemingly at random. This becomes visible to the app and end user as a big stall in activity. How are you monitoring the COMMIT times? What do you generally see in pg_stat_activity.wait_event during the spikes/stalls?Cheers,Jeff",
"msg_date": "Sat, 9 Jan 2021 15:07:09 -0500",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Sat, Jan 9, 2021 at 2:07 PM Jeff Janes <[email protected]> wrote:\n\n>\n> How are you monitoring the COMMIT times? What do you generally see in\n> pg_stat_activity.wait_event during the spikes/stalls?\n>\n\nRight now we just observe the COMMIT duration posted in the postgresql log\n(we log anything over 100ms).\n\nOne other thing that I shamefully forgot to mention. When we see these slow\nCOMMITs in the log, they coincide with a connection storm (Cat 5 hurricane)\nfrom our apps where connections will go from ~200 to ~1200. This will\nprobably disgust many, but our PG server's max_connections is set to 2000.\nWe have a set of pgbouncers in front of this with a total\nmax_db_connections of 1600. I know many of you think this defeats the whole\npurpose of having pgbouncer and I agree. I've been trying to explain as\nmuch and that even with 32 CPUs on this DB host, we probably shouldn't\nexpect to be able to support more than 100-200 active connections, let\nalone 1600. 
I'm still pushing to have our app server instances (which also\nuse their own JDBC (Hikari) connection pool and *then* go through\npgbouncer) to lower their min/max connection settings but obviously it's\nsort of counterintuitive at first glance but hopefully everyone sees the\nbigger picture.\n\nOne nagging question I have is if the slow COMMIT is triggering the\nconnection storm (eg app sees slow response or timeout from a current\nconnection and fires off a new connection in its place), or vice-versa.\nWe're planning to deploy new performant cloud storage (Azure Ultra disk)\njust for WAL logs but I'm hesitant to say it'll be a silver bullet when we\nstill have this insane connection management strategy in place.\n\nCurious to know what others think (please pull no punches) and if others\nhave been in a similar scenario with anecdotes to share.\n\nThanks,\nDon.\n\n-- \nDon Seiler\nwww.seiler.us",
"msg_date": "Sun, 10 Jan 2021 18:42:24 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "How far apart are the min/max connection settings on your application\nconnection pool? We had a similar issue with connection storms in the past\non Oracle. One thing we did to minimize the storms was make sure there was\nnot a wide gap between the min/max, say no more than a 5-10 connection\ndifference between min/max.\n\nRegards,\n\nCraig\n\nOn Sun, Jan 10, 2021 at 5:42 PM Don Seiler <[email protected]> wrote:\n\n> On Sat, Jan 9, 2021 at 2:07 PM Jeff Janes <[email protected]> wrote:\n>\n>>\n>> How are you monitoring the COMMIT times? What do you generally see in\n>> pg_stat_activity.wait_event during the spikes/stalls?\n>>\n>\n> Right now we just observe the COMMIT duration posted in the postgresql log\n> (we log anything over 100ms).\n>\n> One other thing that I shamefully forgot to mention. When we see these\n> slow COMMITs in the log, they coincide with a connection storm (Cat 5\n> hurricane) from our apps where connections will go from ~200 to ~1200. This\n> will probably disgust many, but our PG server's max_connections is set to\n> 2000. We have a set of pgbouncers in front of this with a total\n> max_db_connections of 1600. I know many of you think this defeats the whole\n> purpose of having pgbouncer and I agree. I've been trying to explain as\n> much and that even with 32 CPUs on this DB host, we probably shouldn't\n> expect to be able to support more than 100-200 active connections, let\n> alone 1600. 
I'm still pushing to have our app server instances (which also\n> use their own JDBC (Hikari) connection pool and *then* go through\n> pgbouncer) to lower their min/max connection settings but obviously it's\n> sort of counterintuitive at first glance but hopefully everyone sees the\n> bigger picture.\n>\n> One nagging question I have is if the slow COMMIT is triggering the\n> connection storm (eg app sees slow response or timeout from a current\n> connection and fires off a new connection in its place), or vice-versa.\n> We're planning to deploy new performant cloud storage (Azure Ultra disk)\n> just for WAL logs but I'm hesitant to say it'll be a silver bullet when we\n> still have this insane connection management strategy in place.\n>\n> Curious to know what others think (please pull no punches) and if others\n> have been in a similar scenario with anecdotes to share.\n>\n> Thanks,\n> Don.\n>\n> --\n> Don Seiler\n> www.seiler.us\n>\n\n\n-- \nCraig",
"msg_date": "Mon, 11 Jan 2021 08:05:24 -0700",
"msg_from": "Craig Jackson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 9:06 AM Craig Jackson <[email protected]>\nwrote:\n\n> How far apart are the min/max connection settings on your application\n> connection pool? We had a similar issue with connection storms in the past\n> on Oracle. One thing we did to minimize the storms was make sure there was\n> not a wide gap between the min/max, say no more than a 5-10 connection\n> difference between min/max.\n>\n\nApp instances by default are configured for 10 connections in their Hikari\nconnection pool, with no different setting for min so it's always 10. Some\nhave begun setting a minimumIdle to 5. I'm pushing them now to lower their\nminimumIdle to something like 2, and if they are scaled out to multiple app\ninstances then they should think about lowering their max from 10 to 5\nperhaps.\n\n From a recent spike from this morning (names have been changed, but other\ndata is real):\n\n From this morning's spike. At 06:03:15, service foo had 2 active sessions\n> and 127 idle sessions. At 06:03:30 (the next tick in grafana), it had 5\n> active sessions but 364 idle sessions. The foo_db DB overall had 9 active\n> and 337 idle at 06:03:15, and then 5 active and 788 idle overall in the\n> next tick. So a flood of new connections were created within that 15 second\n> interval (probably all within the same second) and more or less abandoned.\n\n\nDon.\n-- \nDon Seiler\nwww.seiler.us\n\nOn Mon, Jan 11, 2021 at 9:06 AM Craig Jackson <[email protected]> wrote:How far apart are the min/max connection settings on your application connection pool? We had a similar issue with connection storms in the past on Oracle. One thing we did to minimize the storms was make sure there was not a wide gap between the min/max, say no more than a 5-10 connection difference between min/max.App instances by default are configured for 10 connections in their Hikari connection pool, with no different setting for min so it's always 10. Some have begun setting a minimumIdle to 5. 
I'm pushing them now to lower their minimumIdle to something like 2, and if they are scaled out to multiple app instances then they should think about lowering their max from 10 to 5 perhaps.From a recent spike from this morning (names have been changed, but other data is real):From this morning's spike. At 06:03:15, service foo had 2 active sessions and 127 idle sessions. At 06:03:30 (the next tick in grafana), it had 5 active sessions but 364 idle sessions. The foo_db DB overall had 9 active and 337 idle at 06:03:15, and then 5 active and 788 idle overall in the next tick. So a flood of new connections were created within that 15 second interval (probably all within the same second) and more or less abandoned. Don.-- Don Seilerwww.seiler.us",
"msg_date": "Mon, 11 Jan 2021 09:23:19 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
},
{
"msg_contents": "Today another DBA suggested that perhaps increasing wal_buffers might be an\noption. Currently our wal_buffers is 32MB. Total system memory is 128GB,\nFYI.\n\nI'm curious if wal_buffers being full and/or undersized would present\nitself in higher COMMIT times as we are observing. We're considering trying\nthis with wal_buffers=64MB as a start. I don't see any actual limit defined\nin the documentation, but I'm assuming there's such a thing as \"too high\"\nhere. I'm curious what is and what negative side effects would indicate\nthat as well.\n\n From the documentation:\n\nOn systems with high log output, XLogFlush requests might not occur often\nenough to prevent XLogInsertRecord from having to do writes. On such\nsystems one should increase the number of WAL buffers by modifying the\nwal_buffers parameter. When full_page_writes is set and the system is very\nbusy, setting wal_buffers higher will help smooth response times during the\nperiod immediately following each checkpoint.\n\n\nWe do have full_page_writes=on as well, so this sounds like worth exploring.\n\nWe also currently have wal_compression=off. We have plenty of CPU headroom\nand so are considering enabling that first (since it doesn't require a DB\nrestart like changing wal_buffers does). I imagine enabling wal_compression\nwould save some IO.\n\nI'm interested to hear what you folks think about either or both of these\nsuggestions.\n\nThanks,\nDon.\n\n-- \nDon Seiler\nwww.seiler.us\n\nToday another DBA suggested that perhaps increasing wal_buffers might be an option. Currently our wal_buffers is 32MB. Total system memory is 128GB, FYI.I'm curious if wal_buffers being full and/or undersized would present itself in higher COMMIT times as we are observing. We're considering trying this with wal_buffers=64MB as a start. I don't see any actual limit defined in the documentation, but I'm assuming there's such a thing as \"too high\" here. 
I'm curious what is and what negative side effects would indicate that as well.From the documentation:On systems with high log output, XLogFlush requests might not occur often enough to prevent XLogInsertRecord from having to do writes. On such systems one should increase the number of WAL buffers by modifying the wal_buffers parameter. When full_page_writes is set and the system is very busy, setting wal_buffers higher will help smooth response times during the period immediately following each checkpoint.We do have full_page_writes=on as well, so this sounds like worth exploring.We also currently have wal_compression=off. We have plenty of CPU headroom and so are considering enabling that first (since it doesn't require a DB restart like changing wal_buffers does). I imagine enabling wal_compression would save some IO.I'm interested to hear what you folks think about either or both of these suggestions.Thanks,Don.-- Don Seilerwww.seiler.us",
"msg_date": "Fri, 15 Jan 2021 15:18:48 -0600",
"msg_from": "Don Seiler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High COMMIT times"
}
] |
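The wal_buffers discussion in the thread above can be checked empirically rather than guessed at. A sketch (note: the `pg_stat_wal` view and its `wal_buffers_full` counter only exist on PostgreSQL 14 and later, which is newer than the version in this thread; on older servers only the `SHOW`/`ALTER SYSTEM` parts apply):

```sql
-- Current setting (changing wal_buffers requires a server restart):
SHOW wal_buffers;

-- PostgreSQL 14+ only: wal_buffers_full counts how often a backend had
-- to write WAL itself because the buffers were full. A steadily climbing
-- counter suggests wal_buffers is undersized for the write load.
SELECT wal_buffers_full, wal_write, wal_sync
FROM pg_stat_wal;

-- Trying the two suggestions from the thread:
ALTER SYSTEM SET wal_buffers = '64MB';   -- takes effect after restart
ALTER SYSTEM SET wal_compression = on;   -- takes effect on reload
SELECT pg_reload_conf();
```

As the thread notes, `wal_compression` can be toggled with just a reload, so it is the cheaper experiment to run first.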
[
{
"msg_contents": "Hi,\n\nThanks in advance for your help. I'm putting as much context and details as\npossible, but let me know if you have any questions.\n\nWhat?\n\nWe are experiencing some slow queries due to the query planner using an\nincorrect index. It is using an unoptimized index because the stats are\ncomputed during the night when the data is not the same as during the day.\n\nContext\n\nWe have a table conversations like that\n\n|id|status|user_id|\n\nand 2 indexes:\n\nCREATE INDEX index_conversations_on_user_id_and_status ON\npublic.conversations USING btree (user_id, status);\n\nCREATE INDEX index_conversations_on_status ON public.conversations USING\nbtree (status)\n\nThe slow query is the following:\n\nSELECT id FROM conversations WHERE status = 'in_progress' AND user_id = 123\n\nWe expect the query planner to use the\nindex_conversations_on_user_id_and_status but it sometimes uses the other\none.\n\nWhat's happening ?\n\nThere are hundreds of conversations with a status 'in_progress' at a given\ntime during the day but virtually none during the night.\n\nSo when the analyze is run during the night, PG then thinks that using the\nindex_conversations_on_status will return almost no rows and so it uses\nthis index instead of the combined one.\n\nWhen the analyze is run during the day, PG correctly uses the right index\n(index_conversations_on_user_id_and_status)\n\n[With an analyze run during the day]\n\nLimit (cost=0.43..8.45 rows=1 width=8) (actual time=1.666..1.666 rows=0\nloops=1)\n\n-> Index Scan using index_conversations_on_user_id_and_status on\nconversations (cost=0.43..8.45 rows=1 width=8) (actual_time=1.665..1.665\nrows:0 loops:1)\n\nIndex Cond: ((user_id = 123) AND ((status)::text = 'in_progress'::text))\n\nFilter: (id <> 1)\n\nPlanning Time: 8.642 ms\n\nExecution Time: 1.693 ms\n\n[With an analyze run during the night]\n\nLimit (cost=0.43..8.46 rows=1 width=8) (actual time=272.812..272.812 rows=0\nloops=1)\n\n-> Index Scan using 
index_conversations_on_status on conversations\n(cost=0.43..8.46 rows=1 width=8) (actual_time=272.812..272.812 rows:0\nloops:1)\n\nIndex Cond: ((status)::text = 'in_progress'::text))\n\nFilter: (id <> 1) AND (user_id = 123)\n\nRows Removed by Filter: 559\n\nPlanning Time: 0.133 ms\n\nExecution Time: 272.886 ms\n\nThe question\n\nWe currently run a manual weekly vacuum analyze during the night. I'm\nwondering what are our possible solutions. One is to manually run the\nanalyze during the day but is there a way to tell PG to run the auto\nanalyze at a given time of the day for example ? I guess we are not the\nfirst ones to have data patterns that differ between when the analyze is\nrun and the query is run.\n\nConfig\nPostgres version: 11Table Metadata\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname='conversations';\n\n relname | relpages | reltuples | relallvisible | relkind | relnatts |\nrelhassubclass | reloptions | pg_table_size\n\n-------------+----------+------------+---------------+---------+----------+----------------+------------+---------------\n\nconversations | 930265 | 7.3366e+06 | 902732 | r | 16\n| f | | 7622991872\nMaintenance Setup\n\nWe have manual vacuum analyze every week during the night.\n\nGUC Settings\nUnsure what's necessary...\n\n \"autovacuum_analyze_threshold\" = \"50\"\n \"autovacuum_max_workers\" = \"3\",\n \"autovacuum_naptime\" = \"60\"\n \"autovacuum_vacuum_threshold\" = \"50\"\n\n\nStatistics: n_distinct, MCV, histogram\n\nSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,\ntablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE attname='status' AND\ntablename='conversations' ORDER BY 1 DESC;\n\nfrac_mcv | tablename | attname | inherited | null_frac | n_distinct |\nn_mcv | n_hist | 
correlation\n\n----------+-------------+---------+-----------+-----------+------------+-------+--------+-------------\n\n0.999967 | conversations | status | f | 0 | 6 |\n 5 | | 0.967121\n\nHi,Thanks in advance for your help. I'm putting as much context and details as possible, but let me know if you have any questions.What?We are experiencing some slow queries due to the query planner using an incorrect index. It is using an unoptimized index because the stats are computed during the night when the data is not the same as during the day.ContextWe have a table conversations like that|id|status|user_id|and 2 indexes:CREATE INDEX index_conversations_on_user_id_and_status ON public.conversations USING btree (user_id, status);CREATE INDEX index_conversations_on_status ON public.conversations USING btree (status)The slow query is the following: SELECT id FROM conversations WHERE status = 'in_progress' AND user_id = 123We expect the query planner to use the index_conversations_on_user_id_and_status but it sometimes uses the other one.What's happening ?There are hundreds of conversations with a status 'in_progress' at a given time during the day but virtually none during the night.So when the analyze is run during the night, PG then thinks that using the index_conversations_on_status will return almost no rows and so it uses this index instead of the combined one.When the analyze is run during the day, PG correctly uses the right index (index_conversations_on_user_id_and_status)[With an analyze run during the day]Limit (cost=0.43..8.45 rows=1 width=8) (actual time=1.666..1.666 rows=0 loops=1)-> Index Scan using index_conversations_on_user_id_and_status on conversations (cost=0.43..8.45 rows=1 width=8) (actual_time=1.665..1.665 rows:0 loops:1)Index Cond: ((user_id = 123) AND ((status)::text = 'in_progress'::text))Filter: (id <> 1)Planning Time: 8.642 msExecution Time: 1.693 ms[With an analyze run during the night]Limit (cost=0.43..8.46 rows=1 width=8) (actual 
time=272.812..272.812 rows=0 loops=1)-> Index Scan using index_conversations_on_status on conversations (cost=0.43..8.46 rows=1 width=8) (actual_time=272.812..272.812 rows:0 loops:1)Index Cond: ((status)::text = 'in_progress'::text))Filter: (id <> 1) AND (user_id = 123)Rows Removed by Filter: 559Planning Time: 0.133 msExecution Time: 272.886 msThe questionWe currently run a manual weekly vacuum analyze during the night. I'm wondering what are our possible solutions. One is to manually run the analyze during the day but is there a way to tell PG to run the auto analyze at a given time of the day for example ? I guess we are not the first ones to have data patterns that differ between when the analyze is run and the query is run.ConfigPostgres version: 11Table MetadataSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='conversations'; relname | relpages | reltuples | relallvisible | relkind | relnatts | relhassubclass | reloptions | pg_table_size-------------+----------+------------+---------------+---------+----------+----------------+------------+---------------conversations | 930265 | 7.3366e+06 | 902732 | r | 16 | f | | 7622991872Maintenance SetupWe have manual vacuum analyze every week during the night.GUC SettingsUnsure what's necessary... 
\"autovacuum_analyze_threshold\" = \"50\" \"autovacuum_max_workers\" = \"3\", \"autovacuum_naptime\" = \"60\" \"autovacuum_vacuum_threshold\" = \"50\"Statistics: n_distinct, MCV, histogramSELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, inherited, null_frac, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, correlation FROM pg_stats WHERE attname='status' AND tablename='conversations' ORDER BY 1 DESC;frac_mcv | tablename | attname | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation----------+-------------+---------+-----------+-----------+------------+-------+--------+-------------0.999967 | conversations | status | f | 0 | 6 | 5 | | 0.967121",
"msg_date": "Mon, 11 Jan 2021 16:50:12 +0100",
"msg_from": "=?UTF-8?Q?R=C3=A9mi_Chatenay?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to deal with analyze gathering irrelevant stats"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 04:50:12PM +0100, Rémi Chatenay wrote:\n> We are experiencing some slow queries due to the query planner using an\n> incorrect index. It is using an unoptimized index because the stats are\n> computed during the night when the data is not the same as during the day.\n> \n> CREATE INDEX index_conversations_on_user_id_and_status ON\n> public.conversations USING btree (user_id, status);\n> \n> CREATE INDEX index_conversations_on_status ON public.conversations USING\n> btree (status)\n> \n> The slow query is the following:\n> \n> SELECT id FROM conversations WHERE status = 'in_progress' AND user_id = 123\n> \n> There are hundreds of conversations with a status 'in_progress' at a given\n> time during the day but virtually none during the night.\n> \n> So when the analyze is run during the night, PG then thinks that using the\n> index_conversations_on_status will return almost no rows and so it uses\n> this index instead of the combined one.\n> \n> When the analyze is run during the day, PG correctly uses the right index\n> (index_conversations_on_user_id_and_status)\n\n> We currently run a manual weekly vacuum analyze during the night. I'm\n> wondering what are our possible solutions. One is to manually run the\n> analyze during the day but is there a way to tell PG to run the auto\n> analyze at a given time of the day for example ? I guess we are not the\n> first ones to have data patterns that differ between when the analyze is\n> run and the query is run.\n\nI think you could run manual ANALYZE during the day just for this one column:\n ANALYZE conversations (status);\n\nIf it takes too long or causes a performance issue, you could do:\n SET default_statistics_target=10;\n ANALYZE conversations (status);\n\nYou could also change to make autovacuum do this on its own, by setting:\n ALTER TABLE conversations SET (autovacuum_analyze_scale_factor=0.005);\n\nIf that works but too slow, then maybe ALTER TABLE .. 
SET STATISTICS 10.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 11 Jan 2021 10:29:04 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to deal with analyze gathering irrelevant stats"
},
{
"msg_contents": "I'd personally bake an analyze call on that table (column) into whatever\njob is responsible for changing the state of the table that much, if it's\npossible to do it as a last step.\n\nI'd personally bake an analyze call on that table (column) into whatever job is responsible for changing the state of the table that much, if it's possible to do it as a last step.",
"msg_date": "Mon, 11 Jan 2021 11:37:33 -0500",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to deal with analyze gathering irrelevant stats"
},
{
"msg_contents": "What is the usage pattern of the conversations table? Is getting many\ninserts during the day, or updates of status mostly?\n\nWhy have an index on the status column at all? My guess would be that there\nare 2-10 statuses, but many many rows in the table for most of those\nstatuses. Having a low cardinality index that changes frequently seems\nprone to mis-use by the system.\n\nWhat is the usage pattern of the conversations table? Is getting many inserts during the day, or updates of status mostly?Why have an index on the status column at all? My guess would be that there are 2-10 statuses, but many many rows in the table for most of those statuses. Having a low cardinality index that changes frequently seems prone to mis-use by the system.",
"msg_date": "Mon, 11 Jan 2021 09:47:36 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to deal with analyze gathering irrelevant stats"
},
{
"msg_contents": "I'd say it's a 1 insert for 5 - 10 updates.\n\nAs for the index on the status, it's because we have a job that runs every\nnight that deals with conversations in specific statuses. Having a low\ncardinality index that changes frequently seems prone to mis-use by the\nsystem. -> What would be an alternative ?\n\nOn Mon, Jan 11, 2021 at 5:48 PM Michael Lewis <[email protected]> wrote:\n\n> What is the usage pattern of the conversations table? Is getting many\n> inserts during the day, or updates of status mostly?\n>\n> Why have an index on the status column at all? My guess would be that\n> there are 2-10 statuses, but many many rows in the table for most of those\n> statuses. Having a low cardinality index that changes frequently seems\n> prone to mis-use by the system.\n>\n\nI'd say it's a 1 insert for 5 - 10 updates.As for the index on the status, it's because we have a job that runs every night that deals with conversations in specific statuses. Having a low cardinality index that changes frequently seems prone to mis-use by the system. -> What would be an alternative ?On Mon, Jan 11, 2021 at 5:48 PM Michael Lewis <[email protected]> wrote:What is the usage pattern of the conversations table? Is getting many inserts during the day, or updates of status mostly?Why have an index on the status column at all? My guess would be that there are 2-10 statuses, but many many rows in the table for most of those statuses. Having a low cardinality index that changes frequently seems prone to mis-use by the system.",
"msg_date": "Mon, 11 Jan 2021 17:52:25 +0100",
"msg_from": "=?UTF-8?Q?R=C3=A9mi_Chatenay?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to deal with analyze gathering irrelevant stats"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 9:52 AM Rémi Chatenay <[email protected]>\nwrote:\n\n> I'd say it's a 1 insert for 5 - 10 updates.\n>\n> As for the index on the status, it's because we have a job that runs every\n> night that deals with conversations in specific statuses. Having a low\n> cardinality index that changes frequently seems prone to mis-use by the\n> system. -> What would be an alternative ?\n>\n\nOne option would be a partial index on another field used in that query *where\nstatus in ( list_of_uncommon_statuses_queried_nightly )*\n\nSequential scan may be perfectly fine for a nightly script though.\n\nOn Mon, Jan 11, 2021 at 9:52 AM Rémi Chatenay <[email protected]> wrote:I'd say it's a 1 insert for 5 - 10 updates.As for the index on the status, it's because we have a job that runs every night that deals with conversations in specific statuses. Having a low cardinality index that changes frequently seems prone to mis-use by the system. -> What would be an alternative ?One option would be a partial index on another field used in that query where status in ( list_of_uncommon_statuses_queried_nightly )Sequential scan may be perfectly fine for a nightly script though.",
"msg_date": "Mon, 11 Jan 2021 11:26:58 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to deal with analyze gathering irrelevant stats"
}
] |
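The suggestions made in this thread combine naturally. A sketch of all three (the status value in the partial index is a placeholder — substitute whatever statuses the nightly job actually queries):

```sql
-- Justin's suggestion: a cheap daytime re-sample of just the skewed
-- column; a lower per-column statistics target keeps the scan fast:
ALTER TABLE conversations ALTER COLUMN status SET STATISTICS 10;
ANALYZE conversations (status);

-- Or let autovacuum re-analyze the table far more often on its own
-- (0.5% of rows changed triggers an auto-analyze):
ALTER TABLE conversations SET (autovacuum_analyze_scale_factor = 0.005);

-- Michael's alternative: drop the low-cardinality status index and keep
-- only a small partial index for the nightly job. The planner then cannot
-- mis-pick it for daytime per-user queries ('some_status' is hypothetical):
CREATE INDEX conversations_nightly_idx
    ON conversations (id)
    WHERE status IN ('some_status');
```

The partial-index route also removes the stale-statistics hazard entirely, since the daytime query can only use the composite `(user_id, status)` index.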
[
{
"msg_contents": "Hi,\r\n\r\nWe have got the result of the VACUUM (VERBOSE) as suggested, please find the output as following & suggest further.\r\n\r\nBut please note that this was done on an non production server where uncleaned data was there, although no dead tuples as it doesn’t run any configuration at present. However I can see it’s giving some error related to “stopping truncate” due to some lock conflict.\r\n\r\nEMMPR01:~# psql -d postDb1 -p 5492 -h 101.103.109.99 mmsuper\r\nPassword for user mmsuper:\r\npsql (9.4.9)\r\nType \"help\" for help.\r\n\r\npostDb1# \\dt+\r\n List of relations\r\nSchema | Name | Type | Owner | Size | Description\r\n---------+------------------------------+-------+---------+------------+-------------\r\nSchema1 | auditlogentry | table | super | 0 bytes |\r\nSchema1 | audittraillogentry | table | super | 163 GB |\r\nSchema1 | audittraillogentry_temp_join | table | super | 8192 bytes |\r\nSchema1 | cdrdetails | table | super | 909 MB |\r\nSchema1 | cdrlogentry | table | super | 8192 bytes |\r\nSchema1 | consolidatorlogentry | table | super | 24 kB |\r\nSchema1 | datalostchecklog | table | super | 0 bytes |\r\nSchema1 | eventlogentry | table | super | 56 kB |\r\nSchema1 | fileddtable_file | table | super | 0 bytes |\r\nSchema1 | filescksumcollected | table | super | 27 MB |\r\nSchema1 | filescollected | table | super | 0 bytes |\r\nSchema1 | inserviceperformance | table | super | 4552 kB |\r\nSchema1 | iostatlogentry | table | super | 0 bytes |\r\nSchema1 | loggedalarmentry | table | super | 21 MB |\r\nSchema1 | matchinglogentry | table | super | 8192 bytes |\r\nSchema1 | nrtrde_nerfile | table | super | 8192 bytes |\r\nSchema1 | nrtrde_tmp_nrin | table | super | 0 bytes |\r\nSchema1 | prstatlogentry | table | super | 0 bytes |\r\nSchema1 | statisticlogentry | table | super | 4400 kB |\r\nSchema1 | statisticupgradehistory | table | super | 40 kB |\r\nSchema1 | tpmcdrlog | table | super | 0 bytes |\r\nSchema1 | upgradehistory | table | 
super | 40 kB |\r\nSchema1 | vmstatlogentry | table | super | 0 bytes |\r\n(23 rows)\r\n\r\npostDb1# select * from audittraillogentry order by outtime ASC limit 5;\r\nevent | innodeid | innodename | sourceid | intime | outnodeid | outnodename | destinationid | outtime | bytes | cdrs | tableindex | noofsubfilesinfile | rec\r\nordsequencenumberlist\r\n-------+----------+------------+----------+--------+-----------+-------------+---------------+---------+-------+------+------------+--------------------+----\r\n----------------------\r\n(0 rows)\r\n\r\npostDb1# VACUUM (VERBOSE) audittraillogentry;\r\nINFO: vacuuming \"mmsuper.audittraillogentry\"\r\nINFO: scanned index \"audittraillogentry_pkey\" to remove 946137 row versions\r\nDETAIL: CPU 11.46s/2.92u sec elapsed 40.43 sec.\r\nINFO: scanned index \"audit_intime_index\" to remove 946137 row versions\r\nDETAIL: CPU 18.46s/4.57u sec elapsed 60.16 sec.\r\nINFO: scanned index \"audit_outtime_index\" to remove 946137 row versions\r\nDETAIL: CPU 18.28s/4.53u sec elapsed 56.35 sec.\r\nINFO: scanned index \"audit_sourceid_index\" to remove 946137 row versions\r\nDETAIL: CPU 52.15s/12.12u sec elapsed 176.57 sec.\r\nINFO: scanned index \"audit_destid_index\" to remove 946137 row versions\r\nDETAIL: CPU 46.18s/11.21u sec elapsed 163.85 sec.\r\nINFO: \"audittraillogentry\": removed 946137 row versions in 33096 pages\r\nDETAIL: CPU 2.02s/0.54u sec elapsed 18.75 sec.\r\nINFO: index \"audittraillogentry_pkey\" now contains 0 row versions in 815195 pages\r\nDETAIL: 946137 index row versions were removed.\r\n815155 index pages have been deleted, 801425 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.10 sec.\r\nINFO: index \"audit_intime_index\" now contains 0 row versions in 1274980 pages\r\nDETAIL: 946137 index row versions were removed.\r\n1274868 index pages have been deleted, 1262921 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\r\nINFO: index \"audit_outtime_index\" now contains 0 row versions in 
1288204 pages\r\nDETAIL: 946137 index row versions were removed.\r\n1288086 index pages have been deleted, 1276659 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\r\nINFO: index \"audit_sourceid_index\" now contains 0 row versions in 3711812 pages\r\nDETAIL: 946137 index row versions were removed.\r\n3711581 index pages have been deleted, 3700051 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\r\nINFO: index \"audit_destid_index\" now contains 0 row versions in 3234747 pages\r\nDETAIL: 946137 index row versions were removed.\r\n3234422 index pages have been deleted, 3216227 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\r\nINFO: \"audittraillogentry\": found 291165 removable, 0 nonremovable row versions in 137466 out of 21356455 pages\r\nDETAIL: 0 dead row versions cannot be removed yet.\r\nThere were 5338303 unused item pointers.\r\n0 pages are entirely empty.\r\nCPU 152.39s/37.41u sec elapsed 549.50 sec.\r\nINFO: \"audittraillogentry\": stopping truncate due to conflicting lock request\r\nINFO: vacuuming \"pg_toast.pg_toast_16413\"\r\nINFO: index \"pg_toast_16413_index\" now contains 0 row versions in 1 pages\r\nDETAIL: 0 index row versions were removed.\r\n0 index pages have been deleted, 0 are currently reusable.\r\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\r\nINFO: \"pg_toast_16413\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\r\nDETAIL: 0 dead row versions cannot be removed yet.\r\nThere were 0 unused item pointers.\r\n0 pages are entirely empty.\r\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\r\nVACUUM\r\npostDb1# SELECT pid, datname, usename, state, backend_xmin FROM pg_stat_activity WHERE backend_xmin IS NOT NULL ORDER BY age(backend_xmin) DESC;\r\n pid |datname | usename | state | backend_xmin\r\n-------+-----------------+---------+--------+--------------\r\n23278 | postDb1 | super | active | 1327734444\r\n31637 | postDb1 | super | active | 1327734444\r\n2458 | postDb1 | super | active | 
1327734444\r\n11054 | postDb1 | super | active | 1327734444\r\n12080 | postDb1 | super | active | 1327734444\r\n14810 | postDb1 | super | active | 1327734444\r\n19528 | postDb1 | super | active | 1327734444\r\n16554 | postDb1 | super | active | 1327734444\r\n23303 | postDb1 | super | active | 1327734444\r\n19322 | postDb1 | super | active | 1327734444\r\n25109 | postDb1 | super | active | 1327734444\r\n17445 | postDb1 | super | active | 1327734444\r\n(12 rows)\r\n\r\npostDb1# SELECT slot_name, slot_type, database, xmin FROM pg_replication_slots ORDER BY age(xmin) DESC;\r\nslot_name | slot_type | database | xmin\r\n-----------+-----------+----------+------\r\n(0 rows)\r\n\r\npostDb1# SELECT gid, prepared, owner, database, transaction AS xmin FROM pg_prepared_xacts ORDER BY age(transaction) DESC;\r\ngid | prepared | owner | database | xmin\r\n-----+----------+-------+----------+------\r\n(0 rows)\r\n\r\npostDb1=# \\dt+\r\n List of relations\r\nSchema | Name | Type | Owner | Size | Description\r\n---------+------------------------------+-------+---------+------------+-------------\r\nSchema1 | auditlogentry | table | super | 0 bytes |\r\nSchema1 | audittraillogentry | table | super | 163 GB |\r\nSchema1 | audittraillogentry_temp_join | table | super | 8192 bytes |\r\nSchema1 | cdrdetails | table | super | 909 MB |\r\nSchema1 | cdrlogentry | table | super | 8192 bytes |\r\nSchema1 | consolidatorlogentry | table | super | 24 kB |\r\nSchema1 | datalostchecklog | table | super | 0 bytes |\r\nSchema1 | eventlogentry | table | super | 56 kB |\r\nSchema1 | fileddtable_file | table | super | 0 bytes |\r\nSchema1 | filescksumcollected | table | super | 27 MB |\r\nSchema1 | filescollected | table | super | 0 bytes |\r\nSchema1 | inserviceperformance | table | super | 4552 kB |\r\nSchema1 | iostatlogentry | table | super | 0 bytes |\r\nSchema1 | loggedalarmentry | table | super | 21 MB |\r\nSchema1 | matchinglogentry | table | super | 8192 bytes |\r\nSchema1 | nrtrde_nerfile | 
table | super | 8192 bytes |\r\nSchema1 | nrtrde_tmp_nrin | table | super | 0 bytes |\r\nSchema1 | prstatlogentry | table | super | 0 bytes |\r\nSchema1 | statisticlogentry | table | super | 4400 kB |\r\nSchema1 | statisticupgradehistory | table | super | 40 kB |\r\nSchema1 | tpmcdrlog | table | super | 0 bytes |\r\nSchema1 | upgradehistory | table | super | 40 kB |\r\nSchema1 | vmstatlogentry | table | super | 0 bytes |\r\n(23 rows)\r\n\r\npostDb1=#\r\n\r\nRegards\r\nTarkeshwar\r\n\n\n\n\n\n\n\n\n\n \nHi,\n \nWe have got the result of the VACUUM (VERBOSE) as suggested, please find the output as following & suggest further.\r\n\n \nBut please note that this was done on an non production server where uncleaned data was there, although no dead tuples as it doesn’t run any configuration at present. However I can see it’s giving some error related to “stopping truncate”\r\n due to some lock conflict.\n \nEMMPR01:~# psql -d postDb1 -p 5492 -h 101.103.109.99 mmsuper\nPassword for user mmsuper:\npsql (9.4.9)\nType \"help\" for help.\n \npostDb1# \\dt+\n List of relations\nSchema | Name | Type | Owner | Size | Description\n---------+------------------------------+-------+---------+------------+-------------\nSchema1 | auditlogentry | table | super | 0 bytes |\nSchema1 | audittraillogentry | table | super | 163 GB |\nSchema1 | audittraillogentry_temp_join | table | super | 8192 bytes |\nSchema1 | cdrdetails | table | super | 909 MB |\nSchema1 | cdrlogentry | table | super | 8192 bytes |\nSchema1 | consolidatorlogentry | table | super | 24 kB |\nSchema1 | datalostchecklog | table | super | 0 bytes |\nSchema1 | eventlogentry | table | super | 56 kB |\nSchema1 | fileddtable_file | table | super | 0 bytes |\nSchema1 | filescksumcollected | table | super | 27 MB |\nSchema1 | filescollected | table | super | 0 bytes |\nSchema1 | inserviceperformance | table | super | 4552 kB |\nSchema1 | iostatlogentry | table | super | 0 bytes |\nSchema1 | loggedalarmentry | table | super | 21 
MB |\nSchema1 | matchinglogentry | table | super | 8192 bytes |\nSchema1 | nrtrde_nerfile | table | super | 8192 bytes |\nSchema1 | nrtrde_tmp_nrin | table | super | 0 bytes |\nSchema1 | prstatlogentry | table | super | 0 bytes |\nSchema1 | statisticlogentry | table | super | 4400 kB |\nSchema1 | statisticupgradehistory | table | super | 40 kB |\nSchema1 | tpmcdrlog | table | super | 0 bytes |\nSchema1 | upgradehistory | table | super | 40 kB |\nSchema1 | vmstatlogentry | table | super | 0 bytes |\n(23 rows)\n \npostDb1# select * from audittraillogentry order by outtime ASC limit 5;\nevent | innodeid | innodename | sourceid | intime | outnodeid | outnodename | destinationid | outtime | bytes | cdrs | tableindex | noofsubfilesinfile | rec\nordsequencenumberlist\n-------+----------+------------+----------+--------+-----------+-------------+---------------+---------+-------+------+------------+--------------------+----\n----------------------\n(0 rows)\n \npostDb1# VACUUM (VERBOSE) audittraillogentry;\nINFO: vacuuming \"mmsuper.audittraillogentry\"\nINFO: scanned index \"audittraillogentry_pkey\" to remove 946137 row versions\nDETAIL: CPU 11.46s/2.92u sec elapsed 40.43 sec.\nINFO: scanned index \"audit_intime_index\" to remove 946137 row versions\nDETAIL: CPU 18.46s/4.57u sec elapsed 60.16 sec.\nINFO: scanned index \"audit_outtime_index\" to remove 946137 row versions\nDETAIL: CPU 18.28s/4.53u sec elapsed 56.35 sec.\nINFO: scanned index \"audit_sourceid_index\" to remove 946137 row versions\nDETAIL: CPU 52.15s/12.12u sec elapsed 176.57 sec.\nINFO: scanned index \"audit_destid_index\" to remove 946137 row versions\nDETAIL: CPU 46.18s/11.21u sec elapsed 163.85 sec.\nINFO: \"audittraillogentry\": removed 946137 row versions in 33096 pages\nDETAIL: CPU 2.02s/0.54u sec elapsed 18.75 sec.\nINFO: index \"audittraillogentry_pkey\" now contains 0 row versions in 815195 pages\nDETAIL: 946137 index row versions were removed.\n815155 index pages have been deleted, 801425 are 
currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.10 sec.\nINFO: index \"audit_intime_index\" now contains 0 row versions in 1274980 pages\nDETAIL: 946137 index row versions were removed.\n1274868 index pages have been deleted, 1262921 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\nINFO: index \"audit_outtime_index\" now contains 0 row versions in 1288204 pages\nDETAIL: 946137 index row versions were removed.\n1288086 index pages have been deleted, 1276659 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.07 sec.\nINFO: index \"audit_sourceid_index\" now contains 0 row versions in 3711812 pages\nDETAIL: 946137 index row versions were removed.\n3711581 index pages have been deleted, 3700051 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: index \"audit_destid_index\" now contains 0 row versions in 3234747 pages\nDETAIL: 946137 index row versions were removed.\n3234422 index pages have been deleted, 3216227 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"audittraillogentry\": found 291165 removable, 0 nonremovable row versions in 137466 out of 21356455 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 5338303 unused item pointers.\n0 pages are entirely empty.\nCPU 152.39s/37.41u sec elapsed 549.50 sec.\nINFO: \"audittraillogentry\": stopping truncate due to conflicting lock request\nINFO: vacuuming \"pg_toast.pg_toast_16413\"\nINFO: index \"pg_toast_16413_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_16413\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\npostDb1# SELECT pid, datname, usename, state, backend_xmin FROM pg_stat_activity WHERE 
backend_xmin IS NOT NULL ORDER BY age(backend_xmin) DESC;\n pid |datname | usename | state | backend_xmin\n-------+-----------------+---------+--------+--------------\n23278 | postDb1 | super | active | 1327734444\n31637 | postDb1 | super | active | 1327734444\n2458 | postDb1 | super | active | 1327734444\n11054 | postDb1 | super | active | 1327734444\n12080 | postDb1 | super | active | 1327734444\n14810 | postDb1 | super | active | 1327734444\n19528 | postDb1 | super | active | 1327734444\n16554 | postDb1 | super | active | 1327734444\n23303 | postDb1 | super | active | 1327734444\n19322 | postDb1 | super | active | 1327734444\n25109 | postDb1 | super | active | 1327734444\n17445 | postDb1 | super | active | 1327734444\n(12 rows)\n \npostDb1# SELECT slot_name, slot_type, database, xmin FROM pg_replication_slots ORDER BY age(xmin) DESC;\nslot_name | slot_type | database | xmin\n-----------+-----------+----------+------\n(0 rows)\n \npostDb1# SELECT gid, prepared, owner, database, transaction AS xmin FROM pg_prepared_xacts ORDER BY age(transaction) DESC;\ngid | prepared | owner | database | xmin\n-----+----------+-------+----------+------\n(0 rows)\n \npostDb1=# \\dt+\n List of relations\nSchema | Name | Type | Owner | Size | Description\n---------+------------------------------+-------+---------+------------+-------------\nSchema1 | auditlogentry | table | super | 0 bytes |\nSchema1 | audittraillogentry | table | super | 163 GB |\nSchema1 | audittraillogentry_temp_join | table | super | 8192 bytes |\nSchema1 | cdrdetails | table | super | 909 MB |\nSchema1 | cdrlogentry | table | super | 8192 bytes |\nSchema1 | consolidatorlogentry | table | super | 24 kB |\nSchema1 | datalostchecklog | table | super | 0 bytes |\nSchema1 | eventlogentry | table | super | 56 kB |\nSchema1 | fileddtable_file | table | super | 0 bytes |\nSchema1 | filescksumcollected | table | super | 27 MB |\nSchema1 | filescollected | table | super | 0 bytes |\nSchema1 | inserviceperformance | table 
| super | 4552 kB |\nSchema1 | iostatlogentry | table | super | 0 bytes |\nSchema1 | loggedalarmentry | table | super | 21 MB |\nSchema1 | matchinglogentry | table | super | 8192 bytes |\nSchema1 | nrtrde_nerfile | table | super | 8192 bytes |\nSchema1 | nrtrde_tmp_nrin | table | super | 0 bytes |\nSchema1 | prstatlogentry | table | super | 0 bytes |\nSchema1 | statisticlogentry | table | super | 4400 kB |\nSchema1 | statisticupgradehistory | table | super | 40 kB |\nSchema1 | tpmcdrlog | table | super | 0 bytes |\nSchema1 | upgradehistory | table | super | 40 kB |\nSchema1 | vmstatlogentry | table | super | 0 bytes |\n(23 rows)\n \npostDb1=#\n \nRegards\nTarkeshwar",
"msg_date": "Thu, 14 Jan 2021 12:09:51 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Need information on how MM frees up disk space (vaccum) after\n scheduled DB cleanup by BGwCronScript/BGwLogCleaner"
}
] |
[
{
"msg_contents": "Hi all,\n\n*My top-level query is*: I'm using logical replication under pg 9.6 to do a\nkind of change data capture and I'm seeing occasional extended periods of\nsignificant lag. I'm not sure what conceptual model I'm missing in order to\nunderstand why this happens.\n\n*The details:*\n\nI'm running Postgres 9.6.19 from the postgres debian apt repos & the\nwal2json extension.\n\nI have a custom client application which essentially executes\npg_logical_slot_get_changes() for some manually-created logical replication\nslot on a loop.\n\nI'm monitoring replication lag defined as pg_current_xlog_location() -\nconfirmed_flush_lsn for that slot.\n\nWhat I'm observing is - very occasionally - an extended period (hours long)\nwherein:\n\n* The normal database write load continues or slightly increases\n* calls to pg_logical_slot_get_changes() return no rows and\nconfirmed_flush_lsn doesn't move\n* the duration of a call to pg_logical_slot_get_changes() rises linearly\nover time\n\nI understand from the docs and research that this is usually caused by a\nlong-running write transaction, but I notice I'm still confused.\n\n* I'm not 100% sure - I'm still confirming - but I'm fairly confident that\nI don't have any egregiously long write transactions (at least on that\nscale of hours). Are there any other common scenarios that can result in a\nsimilar 'blockage'? 
e.g some categories of long read-only transactions, or\nadvisory locks, or other kinds of database activity like a vacuum?\n\n* Conversely, from experimenting, it seems as if not all long-running write\ntransactions cause pg_logical_slot_get_changes() to be unable to advance.\nIn fact, I'm not able so far to produce a minimal set of simple queries\nwhich show that behaviour.\n\nGiven the following sequence of queries I see changes emitted:\n\n-- session 1\nbegin;\ninsert into foo(bar,baz) values (1, 1);\n\n-- session 2\nbegin;\ninsert into foo(bar,baz) values (2,2);\ncommit;\n\n-- session 3\n select data from\npg_logical_slot_get_changes('example-slot', NULL, NULL, 'format-version',\n'2');\n\nSession 3 is able to return the row from session 2 despite session 1's\nongoing transaction starting first and not yet committing. Can you help me\nunderstand (or better yet point me to a resource which explains) the\nunderlying logic defining how logical decoding does in fact get blocked by\nin-flight transactions?\n\nThanks,\nPatrick",
"msg_date": "Thu, 21 Jan 2021 18:21:20 +0000",
"msg_from": "Patrick Molgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Understanding logical replication lag"
}
] |
[
{
"msg_contents": "Hi,\nI have a query performance issue, it takes a long time, and not even getting explain analyze the output. this query joining on 3 tables which have around a - 176223509\nb - 286887780\nc - 214219514\n\n\n\nexplainselect Count(a.\"individual_entity_proxy_id\")from \"prospect\" ainner join \"individual_demographic\" bon a.\"individual_entity_proxy_id\" = b.\"individual_entity_proxy_id\"inner join \"household_demographic\" c on a.\"household_entity_proxy_id\" = c.\"household_entity_proxy_id\"where (((a.\"last_contacted_anychannel_dttm\" is null) or (a.\"last_contacted_anychannel_dttm\" < TIMESTAMP '2020-11-23 0:00:00.000000')) and (a.\"shared_paddr_with_customer_ind\" = 'N') and (a.\"profane_wrd_ind\" = 'N') and (a.\"tmo_ofnsv_name_ind\" = 'N') and (a.\"has_individual_address\" = 'Y') and (a.\"has_last_name\" = 'Y') and (a.\"has_first_name\" = 'Y')) and ((b.\"tax_bnkrpt_dcsd_ind\" = 'N') and (b.\"govt_prison_ind\" = 'N') and (b.\"cstmr_prspct_ind\" = 'Prospect')) and (( c.\"hspnc_lang_prfrnc_cval\" in ('B', 'E', 'X') ) or (c.\"hspnc_lang_prfrnc_cval\" is null));-- Explain output\n \"Finalize Aggregate (cost=32813309.28..32813309.29 rows=1 width=8)\"\" -> Gather (cost=32813308.45..32813309.26 rows=8 width=8)\"\" Workers Planned: 8\"\" -> Partial Aggregate (cost=32812308.45..32812308.46 rows=1 width=8)\"\" -> Merge Join (cost=23870130.00..32759932.46 rows=20950395 width=8)\"\" Merge Cond: (a.individual_entity_proxy_id = b.individual_entity_proxy_id)\"\" -> Sort (cost=23870127.96..23922503.94 rows=20950395 width=8)\"\" Sort Key: a.individual_entity_proxy_id\"\" -> Hash Join (cost=13533600.42..21322510.26 rows=20950395 width=8)\"\" Hash Cond: (a.household_entity_proxy_id = c.household_entity_proxy_id)\"\" -> Parallel Seq Scan on prospect a (cost=0.00..6863735.60 rows=22171902 width=16)\"\" Filter: (((last_contacted_anychannel_dttm IS NULL) OR (last_contacted_anychannel_dttm < '2020-11-23 00:00:00'::timestamp without time zone)) AND 
(shared_paddr_with_customer_ind = 'N'::bpchar) AND (profane_wrd_ind = 'N'::bpchar) AND (tmo_ofnsv_name_ind = 'N'::bpchar) AND (has_individual_address = 'Y'::bpchar) AND (has_last_name = 'Y'::bpchar) AND (has_first_name = 'Y'::bpchar))\"\" -> Hash (cost=10801715.18..10801715.18 rows=166514899 width=8)\"\" -> Seq Scan on household_demographic c (cost=0.00..10801715.18 rows=166514899 width=8)\"\" Filter: (((hspnc_lang_prfrnc_cval)::text = ANY ('{B,E,X}'::text[])) OR (hspnc_lang_prfrnc_cval IS NULL))\"\" -> Index Only Scan using indx_individual_demographic_prxyid_taxind_prspctind_prsnind on individual_demographic b (cost=0.57..8019347.13 rows=286887776 width=8)\"\" Index Cond: ((tax_bnkrpt_dcsd_ind = 'N'::bpchar) AND (cstmr_prspct_ind = 'Prospect'::text) AND (govt_prison_ind = 'N'::bpchar))\" \nTables ddl are attached in dbfiddle -- Postgres 11 | db<>fiddle\n\nServer configuration is: Version: 10.11RAM - 320GBvCPU - 32 \"maintenance_work_mem\" 256MB\"work_mem\" 1GB\"shared_buffers\" 64GB\n\nAny suggestions? \n\nThanks,Rj",
"msg_date": "Fri, 22 Jan 2021 01:53:26 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance issue"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 01:53:26AM +0000, Nagaraj Raj wrote:\n> Tables ddl are attached in dbfiddle -- Postgres 11 | db<>fiddle\n> Postgres 11 | db<>fiddle\n> Server configuration is: Version: 10.11RAM - 320GBvCPU - 32 \"maintenance_work_mem\" 256MB\"work_mem\" 1GB\"shared_buffers\" 64GB\n\n> Aggregate (cost=31.54..31.55 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..31.54 rows=1 width=8) (actual time=0.007..0.008 rows=0 loops=1)\n> Join Filter: (a.household_entity_proxy_id = c.household_entity_proxy_id)\n> -> Nested Loop (cost=0.00..21.36 rows=1 width=16) (actual time=0.006..0.007 rows=0 loops=1)\n> Join Filter: (a.individual_entity_proxy_id = b.individual_entity_proxy_id)\n> -> Seq Scan on prospect a (cost=0.00..10.82 rows=1 width=16) (actual time=0.006..0.006 rows=0 loops=1)\n> Filter: (((last_contacted_anychannel_dttm IS NULL) OR (last_contacted_anychannel_dttm < '2020-11-23 00:00:00'::timestamp without time zone)) AND (shared_paddr_with_customer_ind = 'N'::bpchar) AND (profane_wrd_ind = 'N'::bpchar) AND (tmo_ofnsv_name_ind = 'N'::bpchar) AND (has_individual_address = 'Y'::bpchar) AND (has_last_name = 'Y'::bpchar) AND (has_first_name = 'Y'::bpchar))\n> -> Seq Scan on individual_demographic b (cost=0.00..10.53 rows=1 width=8) (never executed)\n> Filter: ((tax_bnkrpt_dcsd_ind = 'N'::bpchar) AND (govt_prison_ind = 'N'::bpchar) AND ((cstmr_prspct_ind)::text = 'Prospect'::text))\n> -> Seq Scan on household_demographic c (cost=0.00..10.14 rows=3 width=8) (never executed)\n> Filter: (((hspnc_lang_prfrnc_cval)::text = ANY ('{B,E,X}'::text[])) OR (hspnc_lang_prfrnc_cval IS NULL))\n> Planning Time: 1.384 ms\n> Execution Time: 0.206 ms\n> 13 rows\n\nIt's doing nested loops with estimated rowcount=1, which indicates a bad\nunderestimate, and suggests that the conditions are redundant or correlated.\n\nMaybe you can handle this with MV stats on the correlated columns:\n\nCREATE STATISTICS prospect_stats 
(dependencies) ON\n shared_paddr_with_customer_ind, profane_wrd_ind, tmo_ofnsv_name_ind, has_individual_address, has_last_name, has_first_name\n FROM prospect;\nCREATE STATISTICS individual_demographic_stats (dependencies) ON\n tax_bnkrpt_dcsd_ind, govt_prison_ind, cstmr_prspct_ind\n FROM individual_demographic;\nANALYZE prospect, individual_demographic;\n\nSince it's expensive to compute stats on large number of columns, I'd then\ncheck *which* are correlated and then only compute MV stats on those. This\nwill show col1=>col2: X where X approaches 1, the conditions are highly\ncorrelated:\nSELECT * FROM pg_statistic_ext; -- pg_statistic_ext_data since v12\n\nAlso, as a diagnostic tool to get \"explain analyze\" to finish, you can\nSET enable_nestloop=off;\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jan 2021 20:35:14 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "\n\nOn 1/22/21 3:35 AM, Justin Pryzby wrote:\n> On Fri, Jan 22, 2021 at 01:53:26AM +0000, Nagaraj Raj wrote:\n>> Tables ddl are attached in dbfiddle -- Postgres 11 | db<>fiddle\n>> Postgres 11 | db<>fiddle\n>> Server configuration is: Version: 10.11RAM - 320GBvCPU - 32 \"maintenance_work_mem\" 256MB\"work_mem\" 1GB\"shared_buffers\" 64GB\n> \n>> Aggregate (cost=31.54..31.55 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..31.54 rows=1 width=8) (actual time=0.007..0.008 rows=0 loops=1)\n>> Join Filter: (a.household_entity_proxy_id = c.household_entity_proxy_id)\n>> -> Nested Loop (cost=0.00..21.36 rows=1 width=16) (actual time=0.006..0.007 rows=0 loops=1)\n>> Join Filter: (a.individual_entity_proxy_id = b.individual_entity_proxy_id)\n>> -> Seq Scan on prospect a (cost=0.00..10.82 rows=1 width=16) (actual time=0.006..0.006 rows=0 loops=1)\n>> Filter: (((last_contacted_anychannel_dttm IS NULL) OR (last_contacted_anychannel_dttm < '2020-11-23 00:00:00'::timestamp without time zone)) AND (shared_paddr_with_customer_ind = 'N'::bpchar) AND (profane_wrd_ind = 'N'::bpchar) AND (tmo_ofnsv_name_ind = 'N'::bpchar) AND (has_individual_address = 'Y'::bpchar) AND (has_last_name = 'Y'::bpchar) AND (has_first_name = 'Y'::bpchar))\n>> -> Seq Scan on individual_demographic b (cost=0.00..10.53 rows=1 width=8) (never executed)\n>> Filter: ((tax_bnkrpt_dcsd_ind = 'N'::bpchar) AND (govt_prison_ind = 'N'::bpchar) AND ((cstmr_prspct_ind)::text = 'Prospect'::text))\n>> -> Seq Scan on household_demographic c (cost=0.00..10.14 rows=3 width=8) (never executed)\n>> Filter: (((hspnc_lang_prfrnc_cval)::text = ANY ('{B,E,X}'::text[])) OR (hspnc_lang_prfrnc_cval IS NULL))\n>> Planning Time: 1.384 ms\n>> Execution Time: 0.206 ms\n>> 13 rows\n> \n> It's doing nested loops with estimated rowcount=1, which indicates a bad\n> underestimate, and suggests that the conditions are redundant or correlated.\n> \n\nNo, it's not. 
The dbfiddle does that because it's using empty tables, \nbut the plan shared by Nagaraj does not contain any nested loops.\n\nNagaraj, if the EXPLAIN ANALYZE does not complete, there are two things \nyou can do to determine which part of the plan is causing trouble.\n\nFirstly, you can profile the backend using perf or some other profiles, \nand if we're lucky the function will give us some hints about which node \ntype is using the CPU.\n\nSecondly, you can \"cut\" the query into smaller parts, to run only parts \nof the plan - essentially start from inner-most join, and incrementally \nadd more and more tables until it gets too long.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 14 Feb 2021 23:03:50 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
},
{
"msg_contents": "What indexes exist on those tables? How many rows do you expect to get back\nin total? Is the last_contacted_anychannel_dttm clause restrictive, or does\nthat include most of the prospect table (check pg_stats for the histogram\nif you don't know).\n\nand (a.\"shared_paddr_with_customer_ind\" = 'N')\n and (a.\"profane_wrd_ind\" = 'N')\n and (a.\"tmo_ofnsv_name_ind\" = 'N')\n and (a.\"has_individual_address\" = 'Y')\n and (a.\"has_last_name\" = 'Y')\n and (a.\"has_first_name\" = 'Y'))\n\nAre these conditions expected to throw out very few rows, or most of the\ntable?\n\nIf you change both joins to EXISTS clauses, do you get the same plan when\nyou run explain?",
"msg_date": "Tue, 16 Feb 2021 09:40:08 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance issue"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm seeking guidance in how to improve the performance of a slow query and\nto have some other sets of eyes confirm that what I wrote does what I\nintend.\n\nAccording to the PostgreSQL wiki there is a set of metadata that I should\nprovide to help you help me. So let's begin there.\n\nPostgreSQL version: PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc\n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n\nFull table and index schema:\n\nCREATE TABLE attempt_scores (\n id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n attempt_report_id bigint NOT NULL,\n score_value double precision NOT NULL,\n created_at timestamptz NOT NULL DEFAULT now(),\n attempt_report_updated_at timestamptz NOT NULL,\n student_id int NOT NULL,\n course_id int NOT NULL,\n assignment_id int NOT NULL,\n score_name citext NOT NULL CHECK (length(trim(score_name)) > 0),\n attempted_by citext NOT NULL CHECK (length(trim(attempted_by)) > 0),\n CONSTRAINT for_upsert UNIQUE (attempt_report_id, score_name)\n);\nCREATE INDEX ON attempt_scores (attempt_report_updated_at);\nCOMMENT ON TABLE attempt_scores IS\n $$The collection of assignment scores extracted from the LMS database.$$;\nCOMMENT ON COLUMN attempt_scores.attempt_report_id IS\n $$Each assignment attempt has an associated attempt report (attempt_reports)\nwhere the scores of the attempt is recorded. 
This column is the pk value from\nthat table.$$;\nCOMMENT ON COLUMN attempt_scores.score_value IS $$The score's value.$$;\nCOMMENT ON COLUMN attempt_scores.created_at IS $$The timestamp the\nrecord was created.$$;\nCOMMENT ON COLUMN attempt_scores.student_id IS $$The student's ID.$$;\nCOMMENT ON COLUMN attempt_scores.course_id IS $$The course's primary\nkey in the LMS database.$$;\nCOMMENT ON COLUMN attempt_scores.assignment_id IS $$The assignment's\nprimary key in the LMS database.$$;\nCOMMENT ON COLUMN attempt_scores.score_name IS $$The source/name of\nthe score.$$;\nCOMMENT ON COLUMN attempt_scores.attempted_by IS 'The users.role column in LMS';\nCOMMENT ON COLUMN attempt_scores.attempt_report_updated_at IS\n $$The timestamp value of attempt_reports.updated_at on the LMS side. We use it\nto find new rows added since the last time we exported to Salesforce.$$;\n\n\nTable metadata:\nrelname | relpages | reltuples | relallvisible | relkind | relnatts |\nrelhassubclass | reloptions | pg_table_size\n----------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n attempt_scores | 130235 | 9352640 | 0 | r | 10\n| f | [NULL] | 1067180032\n\nOther context: The PostgreSQL database is an Amazon RDS instance.\n\nNext up is the query and a description of what it's supposed to do.\n\n-- What this query is supposed to do is to compute averages for a set of\nscoring/learning metrics but it's not so\n-- straight forward. There is an umbrella metric that summarises the others\ncalled the student performance index (SPI)\n-- and the folks who want this info want the averages to be driven by the\nSPI. 
So the basic algorithm is that for each\n-- student/assignment pair, find the assignment that has the highest SPI\nthen use that to collect and average the\n-- component metrics.\nEXPLAIN (ANALYZE, BUFFERS)\nWITH max_spi AS (\n SELECT student_id, assignment_id, max(score_value) spi\n FROM attempt_scores\n WHERE score_name = 'student_performance_index'\n GROUP BY student_id, assignment_id\n HAVING max(score_value) > 0\n), reports AS (\n SELECT max(attempt_report_id) attempt_report_id, max(score_value) spi\n FROM max_spi m NATURAL JOIN attempt_scores\n WHERE score_value = m.spi\n GROUP BY student_id, assignment_id\n)\nSELECT\n avg(spi) spi,\n avg(CASE score_name WHEN 'digital_clinical_experience' THEN score_value\nEND) dce,\n avg(CASE score_name WHEN 'tier1_subjective_data_collection' THEN\nscore_value END) sdc\nFROM reports NATURAL JOIN attempt_scores;\n\nFinally, the EXPLAIN output and some links.\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=672426.02..672426.03 rows=1 width=24) (actual\ntime=903359.923..903368.957 rows=1 loops=1)\n Buffers: shared hit=6167172 read=4199539, temp read=99551 written=99678\n I/O Timings: read=839121.853\n -> Nested Loop (cost=672389.80..672425.91 rows=8 width=37) (actual\ntime=36633.920..885232.956 rows=7034196 loops=1)\n Buffers: shared hit=6167172 read=4199539, temp read=99551\nwritten=99678\n I/O Timings: read=839121.853\n -> GroupAggregate (cost=672389.37..672389.39 rows=1 width=24)\n(actual time=36628.945..41432.502 rows=938244 loops=1)\n Group Key: attempt_scores_2.student_id,\nattempt_scores_2.assignment_id\n Buffers: shared hit=191072 read=329960, temp read=99551\nwritten=99678\n I/O Timings: read=18210.866\n -> Sort (cost=672389.37..672389.37 rows=1 width=24)\n(actual 
time=36628.111..39562.421 rows=2500309 loops=1)\n Sort Key: attempt_scores_2.student_id,\nattempt_scores_2.assignment_id\n Sort Method: external merge Disk: 83232kB\n Buffers: shared hit=191072 read=329960, temp\nread=99551 written=99678\n I/O Timings: read=18210.866\n -> Gather (cost=425676.58..672389.36 rows=1\nwidth=24) (actual time=25405.650..34716.694 rows=2500309 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=191072 read=329960, temp\nread=78197 written=78260\n I/O Timings: read=18210.866\n -> Hash Join (cost=424676.58..671389.26 rows=1\nwidth=24) (actual time=25169.930..34121.825 rows=833436 loops=3)\n Hash Cond: ((attempt_scores_1.student_id =\nattempt_scores_2.student_id) AND (attempt_scores_1.assignment_id =\nattempt_scores_2.assignment_id) AND (attempt_scores_1.score_value =\n(max(attempt_scores_2.score_value))))\n Buffers: shared hit=191072 read=329960,\ntemp read=78197 written=78260\n I/O Timings: read=18210.866\n -> Parallel Seq Scan on attempt_scores\nattempt_scores_1 (cost=0.00..169204.33 rows=3896933 width=24) (actual\ntime=0.013..5775.887 rows=3118127 loops=3)\n Buffers: shared hit=41594 read=88641\n I/O Timings: read=14598.128\n -> Hash (cost=419397.94..419397.94\nrows=235808 width=16) (actual time=25160.038..25160.555 rows=938244 loops=3)\n Buckets: 131072 (originally 131072)\n Batches: 16 (originally 4) Memory Usage: 3786kB\n Buffers: shared hit=149408\nread=241311, temp read=15801 written=27438\n I/O Timings: read=3610.261\n -> GroupAggregate\n (cost=392800.28..417039.86 rows=235808 width=16) (actual\ntime=23175.268..24589.121 rows=938244 loops=3)\n Group Key:\nattempt_scores_2.student_id, attempt_scores_2.assignment_id\n Filter:\n(max(attempt_scores_2.score_value) > '0'::double precision)\n Rows Removed by Filter: 2908\n Buffers: shared hit=149408\nread=241311, temp read=15801 written=15864\n I/O Timings: read=3610.261\n -> Sort\n (cost=392800.28..395879.64 rows=1231743 width=16) (actual\ntime=23174.917..23654.979 
rows=1206355 loops=3)\n Sort Key:\nattempt_scores_2.student_id, attempt_scores_2.assignment_id\n Sort Method: external\nmerge Disk: 30760kB\n Worker 0: Sort Method:\nexternal merge Disk: 30760kB\n Worker 1: Sort Method:\nexternal merge Disk: 30760kB\n Buffers: shared\nhit=149408 read=241311, temp read=15801 written=15864\n I/O Timings:\nread=3610.261\n -> Seq Scan on\nattempt_scores attempt_scores_2 (cost=0.00..247143.00 rows=1231743\nwidth=16) (actual time=16980.832..21585.313 rows=1206355 loops=3)\n Filter:\n(score_name = 'student_performance_index'::citext)\n Rows Removed by\nFilter: 8148027\n Buffers: shared\nhit=149394 read=241311\n I/O Timings:\nread=3610.261\n -> Index Scan using for_upsert on attempt_scores\n (cost=0.43..36.42 rows=8 width=37) (actual time=0.394..0.896 rows=7\nloops=938244)\n Index Cond: (attempt_report_id =\n(max(attempt_scores_1.attempt_report_id)))\n Buffers: shared hit=5976100 read=3869579\n I/O Timings: read=820910.987\n Planning Time: 14.357 ms\n Execution Time: 903426.284 ms\n(55 rows)\n\nTime: 903571.689 ms (15:03.572)\nThe explain output visualized on explain.depesz.com:\nhttps://explain.depesz.com/s/onPZ\n\n--\n\nThank you for your time and consideration,\n\n\nDane",
"msg_date": "Mon, 15 Feb 2021 12:49:29 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "On Mon, Feb 15, 2021 at 12:49:29PM -0500, Dane Foster wrote:\n> PostgreSQL version: PostgreSQL 12.5 on x86_64-pc-linux-gnu, compiled by gcc\n> (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n\n> EXPLAIN (ANALYZE, BUFFERS)\n> WITH max_spi AS (\n\nSince v12, CTEs are usually inlined by default.\nI suspect it doesn't help, but as an experiment you could try\nWITH .. AS MATERIALIZED.\n\nYou could try instead: CREATE TEMPORARY TABLE + ANALYZE, which will use\nstatistics that \"WITH\" CTE's don't have (like the rowcount after GROUPing).\n\n> Aggregate (cost=672426.02..672426.03 rows=1 width=24) (actual time=903359.923..903368.957 rows=1 loops=1)\n> Buffers: shared hit=6167172 read=4199539, temp read=99551 written=99678\n> I/O Timings: read=839121.853\n\nThis shows that most of time is spent in I/O (839s/903s)\n\n> -> Nested Loop (cost=672389.80..672425.91 rows=8 width=37) (actual time=36633.920..885232.956 rows=7034196 loops=1)\n> Buffers: shared hit=6167172 read=4199539, temp read=99551 written=99678\n...\n> -> Hash Join (cost=424676.58..671389.26 rows=1 width=24) (actual time=25169.930..34121.825 rows=833436 loops=3)\n> Hash Cond: ((attempt_scores_1.student_id = attempt_scores_2.student_id) AND (attempt_scores_1.assignment_id = attempt_scores_2.assignment_id) AND (attempt_scores_1.score_value = (max(attempt_scores_2.score_value))))\n\nThis shows that it estimated 1 row but got 833k, so the plan may be no good.\nAs another quick experiment, you could try SET enable_nestloop=off.\n\n> -> Index Scan using for_upsert on attempt_scores (cost=0.43..36.42 rows=8 width=37) (actual time=0.394..0.896 rows=7 loops=938244)\n> Index Cond: (attempt_report_id = (max(attempt_scores_1.attempt_report_id)))\n> Buffers: shared hit=5976100 read=3869579\n> I/O Timings: read=820910.987\n\nThis shows where most of your I/O time is from.\nI think you could maybe improve this by clustering the table on for_upsert and\nanalyzing. 
Very possibly your \"id\" and \"time\" columns are all correlated.\nThey might already/automatically be correlated - you can check the correlation\nstat:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nWithout looking closely, an index might help: student_id,assignment_id\nThat'd avoid the sort, and maybe change the shape of the whole plan.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 15 Feb 2021 16:32:41 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": ">\n> Sort Method: external\n> merge Disk: 30760kB\n> Worker 0: Sort Method:\n> external merge Disk: 30760kB\n> Worker 1: Sort Method:\n> external merge Disk: 30760kB\n>\n\nIf you can increase work_mem, even setting it temporarily higher for the\nsession or transaction, that may dramatically change the plan. The advice\ngiven by Justin particularly about row estimates would be wise to pursue.\nI'd wonder how selective that condition of score_name =\n'student_performance_index' is in filtering out many of the 9.3 million\ntuples in that table and if an index with that as the leading column, or\njust an index on that column would be helpful. You'd need to look at\npg_stats for the table and see how many distinct values, and\nif student_performance_index is relatively high or low (or not present) in\nthe MCVs list.\n\nI am not sure if your query does what you want it to do as I admit I didn't\nfollow your explanation of the desired behavior. My hunch is that you want\nto make use of a window function and get rid of one of the CTEs.\n\n Sort Method: external merge Disk: 30760kB Worker 0: Sort Method: external merge Disk: 30760kB Worker 1: Sort Method: external merge Disk: 30760kBIf you can increase work_mem, even setting it temporarily higher for the session or transaction, that may dramatically change the plan. The advice given by Justin particularly about row estimates would be wise to pursue. I'd wonder how selective that condition of score_name = 'student_performance_index' is in filtering out many of the 9.3 million tuples in that table and if an index with that as the leading column, or just an index on that column would be helpful. You'd need to look at pg_stats for the table and see how many distinct values, and if student_performance_index is relatively high or low (or not present) in the MCVs list.I am not sure if your query does what you want it to do as I admit I didn't follow your explanation of the desired behavior. 
My hunch is that you want to make use of a window function and get rid of one of the CTEs.",
"msg_date": "Tue, 16 Feb 2021 08:12:55 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "On Mon, Feb 15, 2021 at 5:32 PM Justin Pryzby <[email protected]> wrote:\n\n> ...\n>\n> Without looking closely, an index might help: student_id,assignment_id\n> That'd avoid the sort, and maybe change the shape of the whole plan.\n>\nI tried that prior to posting on the forum and it didn't make a difference.\n🙁\n\nI'll try your other suggestions later today or tomorrow. I will keep you\nposted.\n\n-- \n> Justin\n>\n\nThanks,\n\nDane\n\nOn Mon, Feb 15, 2021 at 5:32 PM Justin Pryzby <[email protected]> wrote:...\n\nWithout looking closely, an index might help: student_id,assignment_id\nThat'd avoid the sort, and maybe change the shape of the whole plan.I tried that prior to posting on the forum and it didn't make a difference. 🙁 I'll try your other suggestions later today or tomorrow. I will keep you posted.\n-- \nJustinThanks,Dane",
"msg_date": "Tue, 16 Feb 2021 10:19:26 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote:\n\n> Sort Method: external\n>> merge Disk: 30760kB\n>> Worker 0: Sort\n>> Method: external merge Disk: 30760kB\n>> Worker 1: Sort\n>> Method: external merge Disk: 30760kB\n>>\n>\n> If you can increase work_mem, even setting it temporarily higher for the\n> session or transaction, that may dramatically change the plan.\n>\nI will try increasing work_mem for the session later today.\n\n> The advice given by Justin particularly about row estimates would be wise\n> to pursue.\n>\n\n\n> I'd wonder how selective that condition of score_name =\n> 'student_performance_index' is in filtering out many of the 9.3 million\n> tuples in that table and if an index with that as the leading column, or\n> just an index on that column would be helpful.\n>\nThere are 1,206,355 rows where score_name='student_performance_idex'.\n\n> You'd need to look at pg_stats for the table and see how many distinct\n> values, and if student_performance_index is relatively high or low (or not\n> present) in the MCVs list.\n>\nI will look into that.\n\n\n> I am not sure if your query does what you want it to do as I admit I\n> didn't follow your explanation of the desired behavior. My hunch is that\n> you want to make use of a window function and get rid of one of the CTEs.\n>\nIf you could tell me what part(s) are unclear I would appreciate it so that\nI can write a better comment.\n\nThank you sooo much for all the feedback. It is *greatly* appreciated!\nSincerely,\n\nDane\n\nOn Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote: Sort Method: external merge Disk: 30760kB Worker 0: Sort Method: external merge Disk: 30760kB Worker 1: Sort Method: external merge Disk: 30760kBIf you can increase work_mem, even setting it temporarily higher for the session or transaction, that may dramatically change the plan.I will try increasing work_mem for the session later today. 
The advice given by Justin particularly about row estimates would be wise to pursue. I'd wonder how selective that condition of score_name = 'student_performance_index' is in filtering out many of the 9.3 million tuples in that table and if an index with that as the leading column, or just an index on that column would be helpful.There are 1,206,355 rows where score_name='student_performance_index'. You'd need to look at pg_stats for the table and see how many distinct values, and if student_performance_index is relatively high or low (or not present) in the MCVs list.I will look into that. I am not sure if your query does what you want it to do as I admit I didn't follow your explanation of the desired behavior. My hunch is that you want to make use of a window function and get rid of one of the CTEs.If you could tell me what part(s) are unclear I would appreciate it so that I can write a better comment. Thank you sooo much for all the feedback. It is greatly appreciated!Sincerely,Dane",
"msg_date": "Tue, 16 Feb 2021 10:25:09 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "Short conclusion:\nSwitching from CTEs to temporary tables and analyzing reduced the runtime\nfrom 15 minutes to about 1.5 minutes.\n\n\nLonger conclusion:\n\n@Justin Pryzby <[email protected]>\n\n - I experimented w/ materializing the CTEs and it helped at the margins\n but did not significantly contribute to a reduction in runtime.\n - No clustering was required because once I switched to temporary tables\n the new plan no longer used the for_upsert index.\n\n@Michael Lewis <[email protected]>\n\n - Increasing work_mem to 100MB (up from 4MB) helped at the margins\n (i.e., some 100's of millisecond improvement) but did not represent a\n significant reduction in the runtime.\n - It wasn't obvious to me which window function would be appropriate for\n the problem I was trying to solve therefore I didn't experiment w/ that\n approach.\n - The selectivity of score_name='student_performance_index' was not\n enough for the planner to choose an index over doing a FTS.\n\nFinally, thank you both for helping me bring this poor performing query to\nheel. 
Your insights were helpful and greatly appreciated.\n\nSincerely,\n\nDane\n\n\nOn Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:\n\n>\n> On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote:\n>\n>> Sort Method: external\n>>> merge Disk: 30760kB\n>>> Worker 0: Sort\n>>> Method: external merge Disk: 30760kB\n>>> Worker 1: Sort\n>>> Method: external merge Disk: 30760kB\n>>>\n>>\n>> If you can increase work_mem, even setting it temporarily higher for the\n>> session or transaction, that may dramatically change the plan.\n>>\n> I will try increasing work_mem for the session later today.\n>\n>> The advice given by Justin particularly about row estimates would be wise\n>> to pursue.\n>>\n>\n>\n>> I'd wonder how selective that condition of score_name =\n>> 'student_performance_index' is in filtering out many of the 9.3 million\n>> tuples in that table and if an index with that as the leading column, or\n>> just an index on that column would be helpful.\n>>\n> There are 1,206,355 rows where score_name='student_performance_idex'.\n>\n>> You'd need to look at pg_stats for the table and see how many distinct\n>> values, and if student_performance_index is relatively high or low (or not\n>> present) in the MCVs list.\n>>\n> I will look into that.\n>\n>\n>> I am not sure if your query does what you want it to do as I admit I\n>> didn't follow your explanation of the desired behavior. My hunch is that\n>> you want to make use of a window function and get rid of one of the CTEs.\n>>\n> If you could tell me what part(s) are unclear I would appreciate it so\n> that I can write a better comment.\n>\n> Thank you sooo much for all the feedback. 
It is *greatly* appreciated!\n> Sincerely,\n>\n> Dane\n>\n>\n\nShort conclusion:Switching from CTEs to temporary tables and analyzing reduced the runtime from 15 minutes to about 1.5 minutes.Longer conclusion:@Justin PryzbyI experimented w/ materializing the CTEs and it helped at the margins but did not significantly contribute to a reduction in runtime.No clustering was required because once I switched to temporary tables the new plan no longer used the for_upsert index.@Michael LewisIncreasing work_mem to 100MB (up from 4MB) helped at the margins (i.e., some 100's of millisecond improvement) but did not represent a significant reduction in the runtime.It wasn't obvious to me which window function would be appropriate for the problem I was trying to solve therefore I didn't experiment w/ that approach.The selectivity of score_name='student_performance_index' was not enough for the planner to choose an index over doing a FTS.Finally, thank you both for helping me bring this poor performing query to heel. Your insights were helpful and greatly appreciated.Sincerely,DaneOn Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote: Sort Method: external merge Disk: 30760kB Worker 0: Sort Method: external merge Disk: 30760kB Worker 1: Sort Method: external merge Disk: 30760kBIf you can increase work_mem, even setting it temporarily higher for the session or transaction, that may dramatically change the plan.I will try increasing work_mem for the session later today. The advice given by Justin particularly about row estimates would be wise to pursue. I'd wonder how selective that condition of score_name = 'student_performance_index' is in filtering out many of the 9.3 million tuples in that table and if an index with that as the leading column, or just an index on that column would be helpful.There are 1,206,355 rows where score_name='student_performance_idex'. 
You'd need to look at pg_stats for the table and see how many distinct values, and if student_performance_index is relatively high or low (or not present) in the MCVs list.I will look into that. I am not sure if your query does what you want it to do as I admit I didn't follow your explanation of the desired behavior. My hunch is that you want to make use of a window function and get rid of one of the CTEs.If you could tell me what part(s) are unclear I would appreciate it so that I can write a better comment. Thank you sooo much for all the feedback. It is greatly appreciated!Sincerely,Dane",
"msg_date": "Tue, 16 Feb 2021 14:11:07 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "A small update (see below/inline).\n\n\nOn Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:\n\n> Short conclusion:\n> Switching from CTEs to temporary tables and analyzing reduced the runtime\n> from 15 minutes to about 1.5 minutes.\n>\n>\n> Longer conclusion:\n>\n> @Justin Pryzby <[email protected]>\n>\n> - I experimented w/ materializing the CTEs and it helped at the\n> margins but did not significantly contribute to a reduction in runtime.\n> - No clustering was required because once I switched to temporary\n> tables the new plan no longer used the for_upsert index.\n>\n> @Michael Lewis <[email protected]>\n>\n> - Increasing work_mem to 100MB (up from 4MB) helped at the margins\n> (i.e., some 100's of millisecond improvement) but did not represent a\n> significant reduction in the runtime.\n> - It wasn't obvious to me which window function would be appropriate\n> for the problem I was trying to solve therefore I didn't experiment w/ that\n> approach.\n>\n> I want to update/correct this statement:\n\n>\n> - The selectivity of score_name='student_performance_index' was not\n> enough for the planner to choose an index over doing a FTS.\n>\n> I added a partial index (WHERE\nscore_name='student_performance_index'::citext) and that had a *dramatic*\nimpact. That part of the query went from ~12 seconds to ~1 second.\n\n> Finally, thank you both for helping me bring this poor performing query to\n> heel. 
Your insights were helpful and greatly appreciated.\n>\n> Sincerely,\n>\n> Dane\n>\n>\n> On Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:\n>\n>>\n>> On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]>\n>> wrote:\n>>\n>>> Sort Method: external\n>>>> merge Disk: 30760kB\n>>>> Worker 0: Sort\n>>>> Method: external merge Disk: 30760kB\n>>>> Worker 1: Sort\n>>>> Method: external merge Disk: 30760kB\n>>>>\n>>>\n>>> If you can increase work_mem, even setting it temporarily higher for the\n>>> session or transaction, that may dramatically change the plan.\n>>>\n>> I will try increasing work_mem for the session later today.\n>>\n>>> The advice given by Justin particularly about row estimates would be\n>>> wise to pursue.\n>>>\n>>\n>>\n>>> I'd wonder how selective that condition of score_name =\n>>> 'student_performance_index' is in filtering out many of the 9.3 million\n>>> tuples in that table and if an index with that as the leading column, or\n>>> just an index on that column would be helpful.\n>>>\n>> There are 1,206,355 rows where score_name='student_performance_idex'.\n>>\n>>> You'd need to look at pg_stats for the table and see how many distinct\n>>> values, and if student_performance_index is relatively high or low (or not\n>>> present) in the MCVs list.\n>>>\n>> I will look into that.\n>>\n>>\n>>> I am not sure if your query does what you want it to do as I admit I\n>>> didn't follow your explanation of the desired behavior. My hunch is that\n>>> you want to make use of a window function and get rid of one of the CTEs.\n>>>\n>> If you could tell me what part(s) are unclear I would appreciate it so\n>> that I can write a better comment.\n>>\n>> Thank you sooo much for all the feedback. 
It is *greatly* appreciated!\n>> Sincerely,\n>>\n>> Dane\n>>\n>>\n\nA small update (see below/inline).On Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:Short conclusion:Switching from CTEs to temporary tables and analyzing reduced the runtime from 15 minutes to about 1.5 minutes.Longer conclusion:@Justin PryzbyI experimented w/ materializing the CTEs and it helped at the margins but did not significantly contribute to a reduction in runtime.No clustering was required because once I switched to temporary tables the new plan no longer used the for_upsert index.@Michael LewisIncreasing work_mem to 100MB (up from 4MB) helped at the margins (i.e., some 100's of millisecond improvement) but did not represent a significant reduction in the runtime.It wasn't obvious to me which window function would be appropriate for the problem I was trying to solve therefore I didn't experiment w/ that approach.I want to update/correct this statement: The selectivity of score_name='student_performance_index' was not enough for the planner to choose an index over doing a FTS.I added a partial index (WHERE score_name='student_performance_index'::citext) and that had a dramatic impact. That part of the query went from ~12 seconds to ~1 second.Finally, thank you both for helping me bring this poor performing query to heel. Your insights were helpful and greatly appreciated.Sincerely,DaneOn Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote: Sort Method: external merge Disk: 30760kB Worker 0: Sort Method: external merge Disk: 30760kB Worker 1: Sort Method: external merge Disk: 30760kBIf you can increase work_mem, even setting it temporarily higher for the session or transaction, that may dramatically change the plan.I will try increasing work_mem for the session later today. The advice given by Justin particularly about row estimates would be wise to pursue. 
I'd wonder how selective that condition of score_name = 'student_performance_index' is in filtering out many of the 9.3 million tuples in that table and if an index with that as the leading column, or just an index on that column would be helpful.There are 1,206,355 rows where score_name='student_performance_index'. You'd need to look at pg_stats for the table and see how many distinct values, and if student_performance_index is relatively high or low (or not present) in the MCVs list.I will look into that. I am not sure if your query does what you want it to do as I admit I didn't follow your explanation of the desired behavior. My hunch is that you want to make use of a window function and get rid of one of the CTEs.If you could tell me what part(s) are unclear I would appreciate it so that I can write a better comment. Thank you sooo much for all the feedback. It is greatly appreciated!Sincerely,Dane",
"msg_date": "Wed, 17 Feb 2021 11:51:10 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "*Hi all, *\n\n*This is my first post on this mailing list, I really enjoy it.*\n*I wanted to add some details and answers to this disccusion.*\n\n 17 févr. 2021 à 17:52, Dane Foster <[email protected]> a écrit :\n\n>\n> A small update (see below/inline).\n>\n>\n> On Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:\n>\n>> Short conclusion:\n>> Switching from CTEs to temporary tables and analyzing reduced the runtime\n>> from 15 minutes to about 1.5 minutes.\n>>\n>> *The attempt_scores table is pretty big and it is called 3 times; try to\nrewrite the query in order to reduce call to this table. for example :*\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n*EXPLAIN (ANALYZE, BUFFERS)WITH reports AS ( SELECT student_id,\nassignment_id, max(score_value) FILTER (WHERE score_name =\n'student_performance_index'),\nmax(attempt_report_id) maxid,\nmax(score_value) spi FROM attempt_scores GROUP BY student_id,\nassignment_id HAVING max(score_value) > 0 AND max(score_value) FILTER\n(WHERE score_name = 'student_performance_index') = max(score_value))SELECT\navg(spi) spi, avg(CASE score_name WHEN 'digital_clinical_experience' THEN\nscore_value END) dce, avg(CASE score_name WHEN\n'tier1_subjective_data_collection' THEN score_value END) sdcFROM\nattempt_scores JOIN reports ON\nreports.maxid=attempt_scores.attempt_report_id;*\n\n*Also, I would continue to increase work_mem to 200MB until the external\nmerge is not required.*\n*SET WORK_MEM='200MB'; -- to change only at session level*\n\n>\n>> Longer conclusion:\n>>\n>> @Justin Pryzby <[email protected]>\n>>\n>> - I experimented w/ materializing the CTEs and it helped at the\n>> margins but did not significantly contribute to a reduction in runtime.\n>> - No clustering was required because once I switched to temporary\n>> tables the new plan no longer used the for_upsert index.\n>>\n>> @Michael Lewis <[email protected]>\n>>\n>> - Increasing work_mem to 100MB (up from 4MB) helped at the margins\n>> (i.e., some 100's 
of millisecond improvement) but did not represent a\n>> significant reduction in the runtime.\n>> - It wasn't obvious to me which window function would be appropriate\n>> for the problem I was trying to solve therefore I didn't experiment w/ that\n>> approach.\n>>\n>> I want to update/correct this statement:\n>\n>>\n>> - The selectivity of score_name='student_performance_index' was not\n>> enough for the planner to choose an index over doing a FTS.\n>>\n>> I added a partial index (WHERE\n> score_name='student_performance_index'::citext) and that had a *dramatic*\n> impact. That part of the query went from ~12 seconds to ~1 second.\n>\n\n*Another way to generate perf. gains on this query, CREATE HASH INDEX ON\nattempt_scores(score_name); --since score_name doesn't seem to have a big\ncardinality*\n\nFinally, thank you both for helping me bring this poor performing query to\n>> heel. Your insights were helpful and greatly appreciated.\n>>\n>> Sincerely,\n>>\n>> Dane\n>>\n>>\n>> On Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:\n>>\n>>>\n>>> On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]>\n>>> wrote:\n>>>\n>>>> Sort Method:\n>>>>> external merge Disk: 30760kB\n>>>>> Worker 0: Sort\n>>>>> Method: external merge Disk: 30760kB\n>>>>> Worker 1: Sort\n>>>>> Method: external merge Disk: 30760kB\n>>>>>\n>>>>\n>>>> If you can increase work_mem, even setting it temporarily higher for\n>>>> the session or transaction, that may dramatically change the plan.\n>>>>\n>>> I will try increasing work_mem for the session later today.\n>>>\n>>>> The advice given by Justin particularly about row estimates would be\n>>>> wise to pursue.\n>>>>\n>>>\n>>>\n>>>> I'd wonder how selective that condition of score_name =\n>>>> 'student_performance_index' is in filtering out many of the 9.3\n>>>> million tuples in that table and if an index with that as the leading\n>>>> column, or just an index on that column would be helpful.\n>>>>\n>>> There are 
1,206,355 rows where score_name='student_performance_idex'.\n>>>\n>>>> You'd need to look at pg_stats for the table and see how many distinct\n>>>> values, and if student_performance_index is relatively high or low (or not\n>>>> present) in the MCVs list.\n>>>>\n>>> I will look into that.\n>>>\n>>>\n>>>> I am not sure if your query does what you want it to do as I admit I\n>>>> didn't follow your explanation of the desired behavior. My hunch is that\n>>>> you want to make use of a window function and get rid of one of the CTEs.\n>>>>\n>>> If you could tell me what part(s) are unclear I would appreciate it so\n>>> that I can write a better comment.\n>>>\n>>> Thank you sooo much for all the feedback. It is *greatly* appreciated!\n>>> Sincerely,\n>>>\n>>> Dane\n>>>\n>>>\n\n-- \nRegards,\nYo.\n\nHi all, This is my first post on this mailing list, I really enjoy it.I wanted to add some details and answers to this disccusion. 17 févr. 2021 à 17:52, Dane Foster <[email protected]> a écrit :A small update (see below/inline).On Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:Short conclusion:Switching from CTEs to temporary tables and analyzing reduced the runtime from 15 minutes to about 1.5 minutes.The attempt_scores table is pretty big and it is called 3 times; try to rewrite the query in order to reduce call to this table. 
for example :EXPLAIN (ANALYZE, BUFFERS)WITH reports AS ( SELECT student_id, assignment_id, max(score_value) FILTER (WHERE score_name = 'student_performance_index'), max(attempt_report_id) maxid, max(score_value) spi FROM attempt_scores GROUP BY student_id, assignment_id HAVING max(score_value) > 0 AND max(score_value) FILTER (WHERE score_name = 'student_performance_index') = max(score_value))SELECT avg(spi) spi, avg(CASE score_name WHEN 'digital_clinical_experience' THEN score_value END) dce, avg(CASE score_name WHEN 'tier1_subjective_data_collection' THEN score_value END) sdcFROM attempt_scores JOIN reports ON reports.maxid=attempt_scores.attempt_report_id;Also, I would continue to increase work_mem to 200MB until the external merge is not required.SET WORK_MEM='200MB'; -- to change only at session levelLonger conclusion:@Justin PryzbyI experimented w/ materializing the CTEs and it helped at the margins but did not significantly contribute to a reduction in runtime.No clustering was required because once I switched to temporary tables the new plan no longer used the for_upsert index.@Michael LewisIncreasing work_mem to 100MB (up from 4MB) helped at the margins (i.e., some 100's of millisecond improvement) but did not represent a significant reduction in the runtime.It wasn't obvious to me which window function would be appropriate for the problem I was trying to solve therefore I didn't experiment w/ that approach.I want to update/correct this statement: The selectivity of score_name='student_performance_index' was not enough for the planner to choose an index over doing a FTS.I added a partial index (WHERE score_name='student_performance_index'::citext) and that had a dramatic impact. That part of the query went from ~12 seconds to ~1 second.Another way to generate perf. 
gains on this query, CREATE HASH INDEX ON attempt_scores(score_name); --since score_name doesn't seem to have a big cardinalityFinally, thank you both for helping me bring this poor performing query to heel. Your insights were helpful and greatly appreciated.Sincerely,DaneOn Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]> wrote:On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]> wrote: Sort Method: external merge Disk: 30760kB Worker 0: Sort Method: external merge Disk: 30760kB Worker 1: Sort Method: external merge Disk: 30760kBIf you can increase work_mem, even setting it temporarily higher for the session or transaction, that may dramatically change the plan.I will try increasing work_mem for the session later today. The advice given by Justin particularly about row estimates would be wise to pursue. I'd wonder how selective that condition of score_name = 'student_performance_index' is in filtering out many of the 9.3 million tuples in that table and if an index with that as the leading column, or just an index on that column would be helpful.There are 1,206,355 rows where score_name='student_performance_idex'. You'd need to look at pg_stats for the table and see how many distinct values, and if student_performance_index is relatively high or low (or not present) in the MCVs list.I will look into that. I am not sure if your query does what you want it to do as I admit I didn't follow your explanation of the desired behavior. My hunch is that you want to make use of a window function and get rid of one of the CTEs.If you could tell me what part(s) are unclear I would appreciate it so that I can write a better comment. Thank you sooo much for all the feedback. It is greatly appreciated!Sincerely,Dane\n\n\n-- Regards,Yo.",
"msg_date": "Wed, 17 Feb 2021 19:37:29 +0100",
"msg_from": "Yoan SULTAN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "On Wed, Feb 17, 2021 at 1:37 PM Yoan SULTAN <[email protected]> wrote:\n\n> *Hi all, *\n>\n> *This is my first post on this mailing list, I really enjoy it.*\n> *I wanted to add some details and answers to this disccusion.*\n>\nI'm happy you've decided to join the conversation and about the fact that\nyou've opened up an entirely new avenue for me to investigate and learn\nfrom. I feel like I'm about to level up my SQL-fu! 😊\n\n\n> 17 févr. 2021 à 17:52, Dane Foster <[email protected]> a écrit :\n>\n>>\n>> A small update (see below/inline).\n>>\n>>\n>> On Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:\n>>\n>>> Short conclusion:\n>>> Switching from CTEs to temporary tables and analyzing reduced the\n>>> runtime from 15 minutes to about 1.5 minutes.\n>>>\n>>> *The attempt_scores table is pretty big and it is called 3 times; try to\n> rewrite the query in order to reduce call to this table. for example :*\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> *EXPLAIN (ANALYZE, BUFFERS)WITH reports AS ( SELECT student_id,\n> assignment_id, max(score_value) FILTER (WHERE score_name =\n> 'student_performance_index'),\n> max(attempt_report_id) maxid,\n> max(score_value) spi FROM attempt_scores GROUP BY student_id,\n> assignment_id HAVING max(score_value) > 0 AND max(score_value) FILTER\n> (WHERE score_name = 'student_performance_index') = max(score_value))SELECT\n> avg(spi) spi, avg(CASE score_name WHEN 'digital_clinical_experience' THEN\n> score_value END) dce, avg(CASE score_name WHEN\n> 'tier1_subjective_data_collection' THEN score_value END) sdcFROM\n> attempt_scores JOIN reports ON\n> reports.maxid=attempt_scores.attempt_report_id;*\n>\nGiven: HAVING max(score_value) > 0 AND max(score_value) FILTER (WHERE\nscore_name = 'student_performance_index') = max(score_value)\n\nWhy: max(score_value) FILTER (WHERE score_name =\n'student_performance_index') but no FILTER clause on:\nmax(attempt_report_id)?\n\nSome context for my question. 
I'm new to aggregate expressions therefore I\ndon't have a strong mental model for what's happening. So let me tell you\nwhat I *think* is happening and you can correct me.\n\nThe new HAVING clause that you've added ensures that for each\nstudent/assignment pair/group that we are selecting the max spi value\n(i.e., score_name = 'student_performance_index'). Therefore, isn't the\nFILTER clause in the SELECT section redundant? And if it's *not* redundant\nthen why isn't it necessary for: max(attempt_report_id)?\n\n\n*Also, I would continue to increase work_mem to 200MB until the external\n> merge is not required.*\n> *SET WORK_MEM='200MB'; -- to change only at session level*\n>\n>>\n>>> Longer conclusion:\n>>>\n>>> @Justin Pryzby <[email protected]>\n>>>\n>>> - I experimented w/ materializing the CTEs and it helped at the\n>>> margins but did not significantly contribute to a reduction in runtime.\n>>> - No clustering was required because once I switched to temporary\n>>> tables the new plan no longer used the for_upsert index.\n>>>\n>>> @Michael Lewis <[email protected]>\n>>>\n>>> - Increasing work_mem to 100MB (up from 4MB) helped at the margins\n>>> (i.e., some 100's of millisecond improvement) but did not represent a\n>>> significant reduction in the runtime.\n>>> - It wasn't obvious to me which window function would be appropriate\n>>> for the problem I was trying to solve therefore I didn't experiment w/ that\n>>> approach.\n>>>\n>>> I want to update/correct this statement:\n>>\n>>>\n>>> - The selectivity of score_name='student_performance_index' was not\n>>> enough for the planner to choose an index over doing a FTS.\n>>>\n>>> I added a partial index (WHERE\n>> score_name='student_performance_index'::citext) and that had a *dramatic*\n>> impact. That part of the query went from ~12 seconds to ~1 second.\n>>\n>\n> *Another way to generate perf. 
gains on this query, CREATE HASH INDEX ON\n> attempt_scores(score_name); --since score_name doesn't seem to have a big\n> cardinality*\n>\n> Finally, thank you both for helping me bring this poor performing query to\n>>> heel. Your insights were helpful and greatly appreciated.\n>>>\n>>> Sincerely,\n>>>\n>>> Dane\n>>>\n>>>\n>>> On Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]>\n>>> wrote:\n>>>\n>>>>\n>>>> On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Sort Method:\n>>>>>> external merge Disk: 30760kB\n>>>>>> Worker 0: Sort\n>>>>>> Method: external merge Disk: 30760kB\n>>>>>> Worker 1: Sort\n>>>>>> Method: external merge Disk: 30760kB\n>>>>>>\n>>>>>\n>>>>> If you can increase work_mem, even setting it temporarily higher for\n>>>>> the session or transaction, that may dramatically change the plan.\n>>>>>\n>>>> I will try increasing work_mem for the session later today.\n>>>>\n>>>>> The advice given by Justin particularly about row estimates would be\n>>>>> wise to pursue.\n>>>>>\n>>>>\n>>>>\n>>>>> I'd wonder how selective that condition of score_name =\n>>>>> 'student_performance_index' is in filtering out many of the 9.3\n>>>>> million tuples in that table and if an index with that as the leading\n>>>>> column, or just an index on that column would be helpful.\n>>>>>\n>>>> There are 1,206,355 rows where score_name='student_performance_idex'.\n>>>>\n>>>>> You'd need to look at pg_stats for the table and see how many distinct\n>>>>> values, and if student_performance_index is relatively high or low (or not\n>>>>> present) in the MCVs list.\n>>>>>\n>>>> I will look into that.\n>>>>\n>>>>\n>>>>> I am not sure if your query does what you want it to do as I admit I\n>>>>> didn't follow your explanation of the desired behavior. 
My hunch is that\n>>>>> you want to make use of a window function and get rid of one of the CTEs.\n>>>> If you could tell me what part(s) are unclear I would appreciate it so\n>>>> that I can write a better comment.\n>>>>\n>>>> Thank you sooo much for all the feedback. It is *greatly* appreciated!\n>>>> Sincerely,\n>>>>\n>>>> Dane\n>>>>\n>>>>\n>\n> --\n> Regards,\n> Yo.\n>\nAgain, thanks for joining the conversation. I look forward to hearing from\nyou.\n\nSincerely,\n\nDane",
"msg_date": "Wed, 17 Feb 2021 15:32:23 -0500",
"msg_from": "Dane Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
},
{
"msg_contents": "*You are totally right, the max(score_value) FILTER (WHERE score_name =\n'student_performance_index') in the SELECT clause is redundant.*\n\nLe mer. 17 févr. 2021 à 21:33, Dane Foster <[email protected]> a écrit :\n\n> On Wed, Feb 17, 2021 at 1:37 PM Yoan SULTAN <[email protected]> wrote:\n>\n>> *Hi all, *\n>>\n>> *This is my first post on this mailing list, I really enjoy it.*\n>> *I wanted to add some details and answers to this disccusion.*\n>>\n> I'm happy you've decided to join the conversation and about the fact that\n> you've opened up an entirely new avenue for me to investigate and learn\n> from. I feel like I'm about to level up my SQL-fu! 😊\n>\n>\n>> 17 févr. 2021 à 17:52, Dane Foster <[email protected]> a écrit :\n>>\n>>>\n>>> A small update (see below/inline).\n>>>\n>>>\n>>> On Tue, Feb 16, 2021 at 2:11 PM Dane Foster <[email protected]> wrote:\n>>>\n>>>> Short conclusion:\n>>>> Switching from CTEs to temporary tables and analyzing reduced the\n>>>> runtime from 15 minutes to about 1.5 minutes.\n>>>>\n>>>> *The attempt_scores table is pretty big and it is called 3 times; try\n>> to rewrite the query in order to reduce call to this table. 
for example :*\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> *EXPLAIN (ANALYZE, BUFFERS)WITH reports AS ( SELECT student_id,\n>> assignment_id, max(score_value) FILTER (WHERE score_name =\n>> 'student_performance_index'),\n>> max(attempt_report_id) maxid,\n>> max(score_value) spi FROM attempt_scores GROUP BY student_id,\n>> assignment_id HAVING max(score_value) > 0 AND max(score_value) FILTER\n>> (WHERE score_name = 'student_performance_index') = max(score_value))SELECT\n>> avg(spi) spi, avg(CASE score_name WHEN 'digital_clinical_experience' THEN\n>> score_value END) dce, avg(CASE score_name WHEN\n>> 'tier1_subjective_data_collection' THEN score_value END) sdcFROM\n>> attempt_scores JOIN reports ON\n>> reports.maxid=attempt_scores.attempt_report_id;*\n>>\n> Given: HAVING max(score_value) > 0 AND max(score_value) FILTER (WHERE\n> score_name = 'student_performance_index') = max(score_value)\n>\n> Why: max(score_value) FILTER (WHERE score_name =\n> 'student_performance_index') but no FILTER clause on:\n> max(attempt_report_id)?\n>\n\n> Some context for my question. I'm new to aggregate expressions therefore I\n> don't have a strong mental model for what's happening. So let me tell you\n> what I *think* is happening and you can correct me.\n>\n> The new HAVING clause that you've added ensures that for each\n> student/assignment pair/group that we are selecting the max spi value\n> (i.e., score_name = 'student_performance_index'). Therefore, isn't the\n> FILTER clause in the SELECT section redundant? 
And if it's *not*\n> redundant then why isn't it necessary for: max(attempt_report_id)?\n>\n\n\n>\n> *Also, I would continue to increase work_mem to 200MB until the external\n>> merge is not required.*\n>> *SET WORK_MEM='200MB'; -- to change only at session level*\n>>\n>>>\n>>>> Longer conclusion:\n>>>>\n>>>> @Justin Pryzby <[email protected]>\n>>>>\n>>>> - I experimented w/ materializing the CTEs and it helped at the\n>>>> margins but did not significantly contribute to a reduction in runtime.\n>>>> - No clustering was required because once I switched to temporary\n>>>> tables the new plan no longer used the for_upsert index.\n>>>>\n>>>> @Michael Lewis <[email protected]>\n>>>>\n>>>> - Increasing work_mem to 100MB (up from 4MB) helped at the margins\n>>>> (i.e., some 100's of millisecond improvement) but did not represent a\n>>>> significant reduction in the runtime.\n>>>> - It wasn't obvious to me which window function would be\n>>>> appropriate for the problem I was trying to solve therefore I didn't\n>>>> experiment w/ that approach.\n>>>>\n>>>> I want to update/correct this statement:\n>>>\n>>>>\n>>>> - The selectivity of score_name='student_performance_index' was not\n>>>> enough for the planner to choose an index over doing a FTS.\n>>>>\n>>>> I added a partial index (WHERE\n>>> score_name='student_performance_index'::citext) and that had a\n>>> *dramatic* impact. That part of the query went from ~12 seconds to ~1\n>>> second.\n>>>\n>>\n>> *Another way to generate perf. gains on this query, CREATE HASH INDEX ON\n>> attempt_scores(score_name); --since score_name doesn't seem to have a big\n>> cardinality*\n>>\n>> Finally, thank you both for helping me bring this poor performing query\n>>>> to heel. 
Your insights were helpful and greatly appreciated.\n>>>>\n>>>> Sincerely,\n>>>>\n>>>> Dane\n>>>>\n>>>>\n>>>> On Tue, Feb 16, 2021 at 10:25 AM Dane Foster <[email protected]>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>> On Tue, Feb 16, 2021 at 10:13 AM Michael Lewis <[email protected]>\n>>>>> wrote:\n>>>>>\n>>>>>> Sort Method:\n>>>>>>> external merge Disk: 30760kB\n>>>>>>> Worker 0: Sort\n>>>>>>> Method: external merge Disk: 30760kB\n>>>>>>> Worker 1: Sort\n>>>>>>> Method: external merge Disk: 30760kB\n>>>>>>>\n>>>>>>\n>>>>>> If you can increase work_mem, even setting it temporarily higher for\n>>>>>> the session or transaction, that may dramatically change the plan.\n>>>>>>\n>>>>> I will try increasing work_mem for the session later today.\n>>>>>\n>>>>>> The advice given by Justin particularly about row estimates would be\n>>>>>> wise to pursue.\n>>>>>>\n>>>>>\n>>>>>\n>>>>>> I'd wonder how selective that condition of score_name =\n>>>>>> 'student_performance_index' is in filtering out many of the 9.3\n>>>>>> million tuples in that table and if an index with that as the leading\n>>>>>> column, or just an index on that column would be helpful.\n>>>>>>\n>>>>> There are 1,206,355 rows where score_name='student_performance_idex'.\n>>>>>\n>>>>>> You'd need to look at pg_stats for the table and see how many\n>>>>>> distinct values, and if student_performance_index is relatively high or low\n>>>>>> (or not present) in the MCVs list.\n>>>>>>\n>>>>> I will look into that.\n>>>>>\n>>>>>\n>>>>>> I am not sure if your query does what you want it to do as I admit I\n>>>>>> didn't follow your explanation of the desired behavior. My hunch is that\n>>>>>> you want to make use of a window function and get rid of one of the CTEs.\n>>>>>>\n>>>>> If you could tell me what part(s) are unclear I would appreciate it so\n>>>>> that I can write a better comment.\n>>>>>\n>>>>> Thank you sooo much for all the feedback. 
It is *greatly* appreciated!\n>>>>> Sincerely,\n>>>>>\n>>>>> Dane\n>>>>>\n>>>>>\n>>\n>> --\n>> Regards,\n>> Yo.\n>>\n> Again, thanks for joining the conversation. I look forward to hearing from\n> you.\n>\n> Sincerely,\n>\n> Dane\n>\n\n\n-- \nRegards,\nYo.",
"msg_date": "Thu, 18 Feb 2021 05:33:11 +0100",
"msg_from": "Yoan SULTAN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query and wrong row estimates for CTE"
}
] |
[
{
"msg_contents": "Hi all,\nI'm running a virtual machine with FreeBSD 12.2, PostgreSQL 12.5 and\nUFS as filesystem.\nI was experimenting with fsync = off and pgbench, and I see no\nparticular difference in tps with fsync enabled or disabled.\nNow, the same tiny test on a linux box provides a 10x tps, while on\nFreeBSD it is a 1% increase.\nI'm trying to figure out why, and I suspect there is something related\nto how UFS handles writes.\n\nAny particular advice about tuning and parameters that could be\naffecting the \"no difference\" with fsync turned off?\n\n\n% sudo tunefs -p /dev/gpt/DATA\ntunefs: POSIX.1e ACLs: (-a) disabled\ntunefs: NFSv4 ACLs: (-N) disabled\ntunefs: MAC multilabel: (-l) disabled\ntunefs: soft updates: (-n) disabled\ntunefs: soft update journaling: (-j) disabled\ntunefs: gjournal: (-J) disabled\ntunefs: trim: (-t) enabled\ntunefs: maximum blocks per file in a cylinder group: (-e) 8192\ntunefs: average file size: (-f) 16384\ntunefs: average number of files in a directory: (-s) 64\ntunefs: minimum percentage of free space: (-m) 8%\ntunefs: space to hold for metadata blocks: (-k) 6408\ntunefs: optimization preference: (-o) time\ntunefs: volume label: (-L) DATA\n\n\n",
"msg_date": "Mon, 22 Feb 2021 17:48:28 +0100",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "FreeBSD UFS & fsync"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 5:49 AM Luca Ferrari <[email protected]> wrote:\n> I'm running a virtual machine with FreeBSD 12.2, PostgreSQL 12.5 and\n> UFS as filesystem.\n> I was experimenting with fsync = off and pgbench, and I see no\n> particular difference in tps having fsync enabled or disabled.\n> Now, the same tiny test on a linux box provides a 10x tps, while on\n> FreeBSD is a 1% increase.\n> I'm trying to figure out why, and I suspect there is something related\n> to how UFS handles writes.\n\nDo you have WCE enabled? In that case, modern Linux file systems\nwould do a synchronous SYNCHRONIZE CACHE for our WAL fdatasync(), but\nFreeBSD UFS wouldn't as far as I know. It does know how to do that\n(there's a BIO_FLUSH operation, also used by ZFS), but as far as I can\nsee UFS uses it just for its own file system meta-data crash safety\ncurrently (see softdep_synchronize()). (There is also no FUA flag for\nO_[D]SYNC writes, an even more modern invention.)\n\n\n",
"msg_date": "Tue, 23 Feb 2021 10:37:29 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Mon, Feb 22, 2021 at 10:38 PM Thomas Munro <[email protected]> wrote:\n> Do you have WCE enabled? In that case, modern Linux file systems\n> would do a synchronous SYNCHRONIZE CACHE for our WAL fdatasync(), but\n> FreeBSD UFS wouldn't as far as I know. It does know how to do that\n> (there's a BIO_FLUSH operation, also used by ZFS), but as far as I can\n> see UFS uses it just for its own file system meta-data crash safety\n> currently (see softdep_synchronize()). (There is also no FUA flag for\n> O_[D]SYNC writes, an even more modern invention.)\n\nApparently no WCE, but I could be looking at the wrong piece:\n\n% sysctl kern.cam.ada | grep write_cache\nkern.cam.ada.2.write_cache: -1\nkern.cam.ada.1.write_cache: -1\nkern.cam.ada.0.write_cache: -1\nkern.cam.ada.write_cache: -1\n\nI'm using sata disks, not scsi. Assuming I'm not looking at the wrong\nparameter, I will attach a scsi disk to do the same test and see if\nsomething changes.\nOr if you have any other suggestion about what to inspect, please advise.\n\nThanks,\nLuca\n\n\n",
"msg_date": "Tue, 23 Feb 2021 08:46:50 +0100",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 8:46 AM Luca Ferrari <[email protected]> wrote:\n> I'm using sata disks, not scsi. Assuming I'm not looking at the wrong\n> parameter, I will attach a scsi disk to do the same test and see if\n> something changes.\n\nI've tested the same version of PostgreSQL, same benchmark, on a scsi\ndisk. However, turning off fsync does not provide any improvement at all\n(the difference is less than 1% in tps).\nI've checked and I have WCE enabled on such disk, but apparently I\ncannot modify it (I suspect this is due to the virtualization of the\ndisk):\n\n# echo \"WCE: 0\" | camcontrol modepage da0 -m 0x08 -e\ncamcontrol: error sending mode select command\n# camcontrol modepage da0 -m 0x08 | grep WCE\nWCE: 1\n\nand the filesystem has everything disabled:\n\n# tunefs -p da0p1\ntunefs: Can't stat da0p1: No such file or directory\ntunefs: POSIX.1e ACLs: (-a) disabled\ntunefs: NFSv4 ACLs: (-N) disabled\ntunefs: MAC multilabel: (-l) disabled\ntunefs: soft updates: (-n) disabled\ntunefs: soft update journaling: (-j) disabled\ntunefs: gjournal: (-J) disabled\ntunefs: trim: (-t) disabled\ntunefs: maximum blocks per file in a cylinder group: (-e) 4096\ntunefs: average file size: (-f) 16384\ntunefs: average number of files in a directory: (-s) 64\ntunefs: minimum percentage of free space: (-m) 8%\ntunefs: space to hold for metadata blocks: (-k) 6408\ntunefs: optimization preference: (-o) time\ntunefs: volume label: (-L)\n\nI think I will not be able to test in a virtual environment, unless\nI'm missing something.\n\nThanks,\nLuca\n\n\n",
"msg_date": "Tue, 23 Feb 2021 12:57:22 +0100",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 12:57:22PM +0100, Luca Ferrari wrote:\n> On Tue, Feb 23, 2021 at 8:46 AM Luca Ferrari <[email protected]> wrote:\n> > I'm using sata disks, not scsi. Assuming I'm not looking at the wrong\n> > parameter, I wil attach a scsi disk to do the same test and see if\n> > something changes.\n> \n> I've tested the same version of PostgreSQL, same benchmark, on a scsi\n> disk. However, turning off fsync does not provide any increment at all\n> (something that spans in less than 1% tps).\n> I've checked and I have WCE enabled on such disk, but apparently I\n> cannot modify (I suspect this is due to the virtualization of the\n> disk):\n\nYou should really be running pg_test_fsync for this kind of testing.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 09:29:45 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 3:29 PM Bruce Momjian <[email protected]> wrote:\n>\n> You should really be running pg_test_fsync for this kind of testing.\n>\n\nSorry Bruce, but it is not clear to me: pg_test_fsync compares\ndifferent fsync implementations, but not the fsync on/off setting of a\ncluster.\n\nNow, pg_test_fsync reports the \"non synced writes\", which are\neffectively 15x faster (that is near to what I was expecting turning\noff fsync):\n\n% pg_test_fsync\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync n/a\n fdatasync 16269.365 ops/sec 61 usecs/op\n fsync 8471.429 ops/sec 118 usecs/op\n fsync_writethrough n/a\n open_sync 5664.861 ops/sec 177 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync n/a\n fdatasync 15196.244 ops/sec 66 usecs/op\n fsync 7754.729 ops/sec 129 usecs/op\n fsync_writethrough n/a\n open_sync 2670.645 ops/sec 374 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write 5486.140 ops/sec 182 usecs/op\n 2 * 8kB open_sync writes 2344.310 ops/sec 427 usecs/op\n 4 * 4kB open_sync writes 1323.548 ops/sec 756 usecs/op\n 8 * 2kB open_sync writes 659.449 ops/sec 1516 usecs/op\n 16 * 1kB open_sync writes 332.844 ops/sec 3004 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 7515.006 ops/sec 133 usecs/op\n write, close, fsync 7107.698 ops/sec 141 usecs/op\n\nNon-sync'ed 8kB writes:\n write 278484.510 ops/sec 4 usecs/op\n\n\n\nHowever, these are not results I'm getting via pgbench.\n\n% sudo -u postgres postgres 
-C fsync -D /postgres/12/data\non\n% sudo -u postgres postgres -C checkpoint_timeout -D /postgres/12/data\n30\n% pgbench -T 60 -c 4 -r -n -U luca pgbench\n...\nnumber of transactions actually processed: 7347\nlatency average = 32.947 ms\ntps = 121.405308 (including connections establishing)\ntps = 121.429075 (excluding connections establishing)\n\n\n% sudo -u postgres postgres -C checkpoint_timeout -D /postgres/12/data\n30\n% sudo -u postgres postgres -C fsync -D /postgres/12/data\noff\n% pgbench -T 60 -c 4 -r -n -U luca pgbench\n...\nnumber of transactions actually processed: 8220\nlatency average = 29.212 ms\ntps = 136.929481 (including connections establishing)\ntps = 136.963971 (excluding connections establishing)\n\n\nOf course, the above test is really quick (and covers at least one\ncheckpoint), but even longer tests provide similar results, which are\nsomehow in contrast with the pg_test_fsync result.\nHowever, apparently the problem is not related to disk cache, since\npg_test_fsync reports correct times (as far as I understand).\nAm I missing something?\n\nLuca\n\n\n",
"msg_date": "Fri, 12 Mar 2021 10:08:50 +0100",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 10:09 PM Luca Ferrari <[email protected]> wrote:\n> fdatasync 16269.365 ops/sec 61 usecs/op\n> fsync 8471.429 ops/sec 118 usecs/op\n\n> Non-sync'ed 8kB writes:\n> write 278484.510 ops/sec 4 usecs/op\n\n> tps = 136.963971 (excluding connections establishing)\n\nIt looks like your system is performing very badly for some other\nreason, so that synchronous I/O waits are only a small proportion of\nthe time, and thus fsync=off doesn't speed things up very much. I'd\nlook into profiling the system to try to figure out what it's doing...\nmaybe it's suffering from super slow hypercalls for gettimeofday(), or\nsomething like that?\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:33:29 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 10:33:29PM +1300, Thomas Munro wrote:\n> On Fri, Mar 12, 2021 at 10:09 PM Luca Ferrari <[email protected]> wrote:\n> > fdatasync 16269.365 ops/sec 61 usecs/op\n> > fsync 8471.429 ops/sec 118 usecs/op\n> \n> > Non-sync'ed 8kB writes:\n> > write 278484.510 ops/sec 4 usecs/op\n> \n> > tps = 136.963971 (excluding connections establishing)\n> \n> It looks like your system is performing very badly for some other\n> reason, so that synchronous I/O waits are only a small proportion of\n> the time, and thus fsync=off doesn't speed things up very much. I'd\n> look into profiling the system to try to figure out what it's doing...\n> maybe it's suffering from super slow hypercalls for gettimeofday(), or\n> something like that?\n\nAnd we have pg_test_timing for gettimeofday() testing.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 14:39:58 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FreeBSD UFS & fsync"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 10:34 AM Thomas Munro <[email protected]> wrote:\n> It looks like your system is performing very badly for some other\n> reason, so that synchronous I/O waits are only a small proportion of\n> the time, and thus fsync=off doesn't speed things up very much. I'd\n> look into profiling the system to try to figure out what it's doing...\n> maybe it's suffering from super slow hypercalls for gettimeofday(), or\n> something like that?\n\n\nLet me get this straight to see if I understand it correctly:\npg_test_fsync reports 278000 tps in non sync-ed mode, and that is what\nI should expect (nearly) from turning off fsyc.\nHowever, something else is eating my resources, so I'm not getting the\ncorrect results.\nNow, what do you mean by profiling the system? Since I'm on FreeBSD I\ncould use dtrace to see if there's any clue where the time is spent,\neven if I'm not so expert in dtrace.\nPlease also note that pg_test_timing seems fine to me (I've tried\nseveral times with pretty much the same results):\n\n% pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 37.68 ns\nHistogram of timing durations:\n < us % of total count\n 1 96.46399 76796834\n 2 3.52417 2805657\n 4 0.00400 3183\n 8 0.00320 2546\n 16 0.00235 1871\n 32 0.00124 988\n 64 0.00065 517\n 128 0.00024 189\n 256 0.00007 58\n 512 0.00003 26\n 1024 0.00002 18\n 2048 0.00002 19\n 4096 0.00001 9\n 8192 0.00000 1\n\nSo apparently gettimeofday should not be the problem right here.\n\nLuca\n\n\n",
"msg_date": "Mon, 15 Mar 2021 14:35:49 +0100",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FreeBSD UFS & fsync"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI have 2 postgres instances created from the same dump (backup), one on a\nGCP VM and the other on AWS RDS. The first instance takes 18 minutes and\nthe second one takes less than 20s to run this simples query:\nSELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n\"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n'2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\nI’ve run this query a few times to make sure both should be reading data\nfrom cache.\nI expect my postgres on GPC to be at least similar to the one managed by\nAWS RDS so that I can work on improvements parallelly and compare.\n\n\n\n*DETAILS:Query explain for Postgres on GCP VM:*Bitmap Heap Scan on\nSignalRecordsBlobs SignalRecordsBlobs (cost=18.80..2480.65 rows=799\nwidth=70) (actual time=216.766..776.032 rows=5122 loops=1)\n Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp without\ntime zone) AND (\"DateTime\" <= \\'2020-07-23 21:12:32.249\\'::timestamp\nwithout time zone))\n Heap Blocks: exact=5223\n Buffers: shared hit=423 read=4821\n -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n (cost=0.00..18.61 rows=824 width=0) (actual time=109.000..109.001\nrows=5228 loops=1)\n Index Cond: (\"SignalSettingId\" = 103)\n Buffers: shared hit=3 read=18\nPlanning time: 456.315 ms\nExecution time: 776.976 ms\n\n\n*Query explain for Postgres on AWS RDS:*Bitmap Heap Scan on\nSignalRecordsBlobs SignalRecordsBlobs (cost=190.02..13204.28 rows=6213\nwidth=69) (actual time=2.215..14.505 rows=5122 loops=1)\n Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp without\ntime zone) AND (\"DateTime\" <= \\'2020-07-23 21:12:32.249\\'::timestamp\nwithout time zone))\n Heap Blocks: exact=5209\n Buffers: shared hit=3290 read=1948\n -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n (cost=0.00..188.46 rows=6405 width=0) (actual time=1.159..1.159 rows=5228\nloops=1)\n Index 
Cond: (\"SignalSettingId\" = 103)\n Buffers: shared hit=3 read=26\nPlanning time: 0.407 ms\nExecution time: 14.87 ms\n\n\n*PostgreSQL version number running:• VM on GCP*: PostgreSQL 11.10 (Debian\n11.10-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6)\n8.3.0, 64-bit\n*• Managed by RDS on AWS:* PostgreSQL 11.10 on x86_64-pc-linux-gnu,\ncompiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n\n\n*How PostgreSQL was installed:• VM on GCP*: Already installed when created\nVM running Debian on Google Console.\n*• Managed by RDS on AWS:* RDS managed the installation.\n\n\n*Changes made to the settings in the postgresql.conf file:*Here are some\npostgres parameters that might be useful:\n*Instance on VM on GCP (2 vCPUs, 2 GB memory, 800 GB disk):*\n• effective_cache_size: 1496MB\n• maintenance_work_mem: 255462kB (close to 249MB)\n• max_wal_size: 1GB\n• min_wal_size: 512MB\n• shared_buffers: 510920kB (close to 499MB)\n• max_locks_per_transaction 1000\n• wal_buffers: 15320kB (close to 15MB)\n• work_mem: 2554kB\n• effective_io_concurrency: 200\n• dynamic_shared_memory_type: posix\nOn this instance we installed a postgres extension called timescaledb to\ngain performance on other tables. 
Some of these parameters were set using\nrecommendations from that extension.\n\n*Instance managed by RDS (2 vCPUs, 2 GiB RAM, 250GB disk, 750 de IOPS):*\n• effective_cache_size: 1887792kB (close to 1844MB)\n• maintenance_work_mem: 64MB\n• max_wal_size: 2GB\n• min_wal_size: 192MB\n• shared_buffers: 943896kB (close to 922MB)\n• max_locks_per_transaction 64\n\n\n*Operating system and version by runing \"uname -a\":• VM on GCP:* Linux\n{{{my instance name}}} 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2\n(2021-01-30) x86_64 GNU/Linux\n*• Managed by AWS RDS:* Aparently Red Hay as shown using SELECT version();\n\n*Program used to connect to PostgreSQL:* Python psycopg2.connect() to\ncreate the connection and pandas read_sql_query() to query using that\nconnection.\n\nThanks in advance",
"msg_date": "Tue, 23 Feb 2021 15:12:19 -0300",
"msg_from": "Maurici Meneghetti <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Hi Maurici,\n\nin my experience the key factor about speed in big queries is sequential \nscan. There is a huge variance in how the system is tuned. In some cases \nI cannot read more than 10 MB/s, in others I get to expect 20-40 MB/s. \nBut then, when things are tuned well and the parallel workers set in, I \nsee the throughput spike to 100-200 MB/s.\n\nYou may have to enable the parallel workers in your postgresql.conf\n\nSo, to me, this is what you want to check first. While the query runs, \nhave both iostat and top running, with top -j or -c or -a or whatever it \nis on that particular OS to see the detail info about the process. \nPerhaps even -H to see threads.\n\nThen you should see good flow with high read speed and reasonable CPU \nload %. If you get low read speed and low CPU that is a sign of IO \nblockage somewhere. If you get high CPU and low IO, that's a planning \nmistake (the nested loop trap). You don't have that here apparently. But \nindex scans I have seen with much worse IO throughput than seq table \nscans. Not sure.\n\nAlso, on AWS you need to be sure you have enough IOPS provisioned on \nyour EBS (I use gp3 now where you can have up to 10k IOPS) and also \ncheck bus throughput of the EC2 instance. Needless to say you don't want \na t* instance where you have a limited burst CPU capacity only.\n\nregards,\n-Gunther\n\nOn 2/23/2021 1:12 PM, Maurici Meneghetti wrote:\n> Hi everyone,\n>\n> I have 2 postgres instances created from the same dump (backup), one \n> on a GCP VM and the other on AWS RDS. 
The first instance takes 18 \n> minutes and the second one takes less than 20s to run this simples query:\n> SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" \n> BETWEEN '2019-11-28T14:00:12.540200000' AND \n> '2020-07-23T21:12:32.249000000';\n> I’ve run this query a few times to make sure both should be reading \n> data from cache.\n> I expect my postgres on GPC to be at least similar to the one managed \n> by AWS RDS so that I can work on improvements parallelly and compare.\n>\n> *DETAILS:\n> Query explain for Postgres on GCP VM:\n> *Bitmap Heap Scan on SignalRecordsBlobs SignalRecordsBlobs \n> (cost=18.80..2480.65 rows=799 width=70) (actual time=216.766..776.032 \n> rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp \n> without time zone) AND (\"DateTime\" <= \\'2020-07-23 \n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5223\n> Buffers: shared hit=423 read=4821\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId \n> (cost=0.00..18.61 rows=824 width=0) (actual time=109.000..109.001 \n> rows=5228 loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared hit=3 read=18\n> Planning time: 456.315 ms\n> Execution time: 776.976 ms\n>\n> *Query explain for Postgres on AWS RDS:\n> *Bitmap Heap Scan on SignalRecordsBlobs SignalRecordsBlobs \n> (cost=190.02..13204.28 rows=6213 width=69) (actual time=2.215..14.505 \n> rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp \n> without time zone) AND (\"DateTime\" <= \\'2020-07-23 \n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5209\n> Buffers: shared hit=3290 read=1948\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId \n> (cost=0.00..188.46 rows=6405 width=0) (actual time=1.159..1.159 \n> rows=5228 loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared 
hit=3 read=26\n> Planning time: 0.407 ms\n> Execution time: 14.87 ms\n>\n> *PostgreSQL version number running:\n> • VM on GCP*: PostgreSQL 11.10 (Debian 11.10-0+deb10u1) on \n> x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n> *• Managed by RDS on AWS:* PostgreSQL 11.10 on x86_64-pc-linux-gnu, \n> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>\n> *How PostgreSQL was installed:\n> • VM on GCP*: Already installed when created VM running Debian on \n> Google Console.\n> *• Managed by RDS on AWS:* RDS managed the installation.\n>\n> *Changes made to the settings in the postgresql.conf file:\n> *Here are some postgres parameters that might be useful:\n> *Instance on VM on GCP (2 vCPUs, 2 GB memory, 800 GB disk):*\n> • effective_cache_size: 1496MB\n> • maintenance_work_mem: 255462kB (close to 249MB)\n> • max_wal_size: 1GB\n> • min_wal_size: 512MB\n> • shared_buffers: 510920kB (close to 499MB)\n> • max_locks_per_transaction 1000\n> • wal_buffers: 15320kB (close to 15MB)\n> • work_mem: 2554kB\n> • effective_io_concurrency: 200\n> • dynamic_shared_memory_type: posix\n> On this instance we installed a postgres extension called timescaledb \n> to gain performance on other tables. 
Some of these parameters were set \n> using recommendations from that extension.\n>\n> *Instance managed by RDS (2 vCPUs, 2 GiB RAM, 250GB disk, 750 de IOPS):*\n> • effective_cache_size: 1887792kB (close to 1844MB)\n> • maintenance_work_mem: 64MB\n> • max_wal_size: 2GB\n> • min_wal_size: 192MB\n> • shared_buffers: 943896kB (close to 922MB)\n> • max_locks_per_transaction 64\n>\n> *Operating system and version by runing \"uname -a\":\n> • VM on GCP:* Linux {{{my instance name}}} 4.19.0-14-cloud-amd64 #1 \n> SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux\n> *• Managed by AWS RDS:* Aparently Red Hay as shown using SELECT version();\n>\n> *Program used to connect to PostgreSQL:* Python psycopg2.connect() to \n> create the connection and pandas read_sql_query() to query using that \n> connection.\n>\n> Thanks in advance",
"msg_date": "Wed, 24 Feb 2021 08:10:20 -0500",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Hi Maurici,\n\nas a starting point: can you make sure your GPC instance is configured in\nthe same way AWS is?\nOnce you do it, repeat the tests, and post the outcome.\n\nThanks,\nMilos\n\n\n\nOn Tue, Feb 23, 2021 at 11:14 PM Maurici Meneghetti <\[email protected]> wrote:\n\n> Hi everyone,\n>\n> I have 2 postgres instances created from the same dump (backup), one on a\n> GCP VM and the other on AWS RDS. The first instance takes 18 minutes and\n> the second one takes less than 20s to run this simples query:\n> SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> I’ve run this query a few times to make sure both should be reading data\n> from cache.\n> I expect my postgres on GPC to be at least similar to the one managed by\n> AWS RDS so that I can work on improvements parallelly and compare.\n>\n>\n>\n> *DETAILS:Query explain for Postgres on GCP VM:*Bitmap Heap Scan on\n> SignalRecordsBlobs SignalRecordsBlobs (cost=18.80..2480.65 rows=799\n> width=70) (actual time=216.766..776.032 rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp\n> without time zone) AND (\"DateTime\" <= \\'2020-07-23\n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5223\n> Buffers: shared hit=423 read=4821\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n> (cost=0.00..18.61 rows=824 width=0) (actual time=109.000..109.001\n> rows=5228 loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared hit=3 read=18\n> Planning time: 456.315 ms\n> Execution time: 776.976 ms\n>\n>\n> *Query explain for Postgres on AWS RDS:*Bitmap Heap Scan on\n> SignalRecordsBlobs SignalRecordsBlobs (cost=190.02..13204.28 rows=6213\n> width=69) (actual time=2.215..14.505 rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 
14:00:12.5402\\'::timestamp\n> without time zone) AND (\"DateTime\" <= \\'2020-07-23\n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5209\n> Buffers: shared hit=3290 read=1948\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n> (cost=0.00..188.46 rows=6405 width=0) (actual time=1.159..1.159 rows=5228\n> loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared hit=3 read=26\n> Planning time: 0.407 ms\n> Execution time: 14.87 ms\n>\n>\n> *PostgreSQL version number running:• VM on GCP*: PostgreSQL 11.10 (Debian\n> 11.10-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6)\n> 8.3.0, 64-bit\n> *• Managed by RDS on AWS:* PostgreSQL 11.10 on x86_64-pc-linux-gnu,\n> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>\n>\n> *How PostgreSQL was installed:• VM on GCP*: Already installed when\n> created VM running Debian on Google Console.\n> *• Managed by RDS on AWS:* RDS managed the installation.\n>\n>\n> *Changes made to the settings in the postgresql.conf file:*Here are some\n> postgres parameters that might be useful:\n> *Instance on VM on GCP (2 vCPUs, 2 GB memory, 800 GB disk):*\n> • effective_cache_size: 1496MB\n> • maintenance_work_mem: 255462kB (close to 249MB)\n> • max_wal_size: 1GB\n> • min_wal_size: 512MB\n> • shared_buffers: 510920kB (close to 499MB)\n> • max_locks_per_transaction 1000\n> • wal_buffers: 15320kB (close to 15MB)\n> • work_mem: 2554kB\n> • effective_io_concurrency: 200\n> • dynamic_shared_memory_type: posix\n> On this instance we installed a postgres extension called timescaledb to\n> gain performance on other tables. 
Some of these parameters were set using\n> recommendations from that extension.\n>\n> *Instance managed by RDS (2 vCPUs, 2 GiB RAM, 250GB disk, 750 de IOPS):*\n> • effective_cache_size: 1887792kB (close to 1844MB)\n> • maintenance_work_mem: 64MB\n> • max_wal_size: 2GB\n> • min_wal_size: 192MB\n> • shared_buffers: 943896kB (close to 922MB)\n> • max_locks_per_transaction 64\n>\n>\n> *Operating system and version by runing \"uname -a\":• VM on GCP:* Linux\n> {{{my instance name}}} 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2\n> (2021-01-30) x86_64 GNU/Linux\n> *• Managed by AWS RDS:* Aparently Red Hay as shown using SELECT version();\n>\n> *Program used to connect to PostgreSQL:* Python psycopg2.connect() to\n> create the connection and pandas read_sql_query() to query using that\n> connection.\n>\n> Thanks in advance\n>",
"msg_date": "Wed, 24 Feb 2021 14:15:36 +0100",
"msg_from": "Milos Babic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n<[email protected]> wrote:\n>\n> I have 2 postgres instances created from the same dump (backup), one on a GCP VM and the other on AWS RDS. The first instance takes 18 minutes and the second one takes less than 20s to run this simples query:\n> SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> I’ve run this query a few times to make sure both should be reading data from cache.\n> I expect my postgres on GPC to be at least similar to the one managed by AWS RDS so that I can work on improvements parallelly and compare.\n>\n> DETAILS:\n> [...]\n> Planning time: 456.315 ms\n> Execution time: 776.976 ms\n>\n> Query explain for Postgres on AWS RDS:\n> [...]\n> Planning time: 0.407 ms\n> Execution time: 14.87 ms\n\nThose queries were executed in respectively ~1s and ~15ms (one thing\nto note is that the slower one had less data in cache, which may or\nmay note account for the difference). Does those plans reflect the\nreality of your slow executions? If yes it's likely due to quite slow\nnetwork transfer. Otherwise we would need an explain plan from the\nslow execution, for which auto_explain can help you. See\nhttps://www.postgresql.org/docs/11/auto-explain.html for more details.\n\n\n",
"msg_date": "Wed, 24 Feb 2021 21:35:12 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Hi, Julien\n\nYour hypothesis about network transfer makes sense. The query returns a big\nsize byte array blobs.\n\nIs there a way to test the network speed against the instances? I have\naccess to the network speed in gcp (5 Mb/s), but don't have access in aws\nrds.\n\n[image: image.png]\n\nThanks in advance\n\nEm qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> <[email protected]> wrote:\n> >\n> > I have 2 postgres instances created from the same dump (backup), one on\n> a GCP VM and the other on AWS RDS. The first instance takes 18 minutes and\n> the second one takes less than 20s to run this simples query:\n> > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > I’ve run this query a few times to make sure both should be reading data\n> from cache.\n> > I expect my postgres on GPC to be at least similar to the one managed by\n> AWS RDS so that I can work on improvements parallelly and compare.\n> >\n> > DETAILS:\n> > [...]\n> > Planning time: 456.315 ms\n> > Execution time: 776.976 ms\n> >\n> > Query explain for Postgres on AWS RDS:\n> > [...]\n> > Planning time: 0.407 ms\n> > Execution time: 14.87 ms\n>\n> Those queries were executed in respectively ~1s and ~15ms (one thing\n> to note is that the slower one had less data in cache, which may or\n> may note account for the difference). Does those plans reflect the\n> reality of your slow executions? If yes it's likely due to quite slow\n> network transfer. Otherwise we would need an explain plan from the\n> slow execution, for which auto_explain can help you. 
See\n> https://www.postgresql.org/docs/11/auto-explain.html for more details.\n>\n\n\n-- \n*Att,*\n\n*Igor Gois | Sócio Consultor*\n(48) 99169-9889 | Skype: igor_msg\nSite <https://bixtecnologia.com.br/>| Blog\n<https://www.bixtecnologia.com.br/blog/> | LinkedIn\n<https://www.linkedin.com/company/bixtecnologia/>| Facebook\n<https://www.facebook.com/bix.tecnologia>| Instagram\n<https://www.instagram.com/bixtecnologia/>",
"msg_date": "Wed, 24 Feb 2021 12:11:30 -0300",
"msg_from": "Igor Gois <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "> I expect my postgres on GPC to be at least similar to the one managed by\nAWS RDS\n\nimho:\n- on Google Cloud you can test with \"Cloud SQL for Postgresql\" (\nhttps://cloud.google.com/sql/docs/postgres )\n- on Google Compute Engine ( VM ): you have to tune the disks ; linux ;\nfile system ; scheduler ;\n and it is a complex task\n\nimho: select the perfect disk types for the postgresql data ( and create\na fast RAID )\nhttps://cloud.google.com/compute/docs/disks\n\n*Compute Engine offers several types of storage options for your instances.\nEach of the following storage options has unique price and performance\ncharacteristics:*\n\n*- Zonal persistent disk: Efficient, reliable block storage.*\n\n*- Regional persistent disk: Regional block storage replicated in two\nzones.*\n*- Local SSD: High performance, transient, local block storage.*\n*- Cloud Storage buckets: Affordable object storage.*\n*- Filestore: High performance file storage for Google Cloud users.*\n\n\nregards,\n Imre\n\n\nMaurici Meneghetti <[email protected]> ezt írta\n(időpont: 2021. febr. 23., K, 23:14):\n\n> Hi everyone,\n>\n> I have 2 postgres instances created from the same dump (backup), one on a\n> GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes and\n> the second one takes less than 20s to run this simples query:\n> SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> I’ve run this query a few times to make sure both should be reading data\n> from cache.\n> I expect my postgres on GPC to be at least similar to the one managed by\n> AWS RDS so that I can work on improvements parallelly and compare.\n>\n>\n>\n> *DETAILS:Query explain for Postgres on GCP VM:*Bitmap Heap Scan on\n> SignalRecordsBlobs SignalRecordsBlobs (cost=18.80..2480.65 rows=799\n> width=70) (actual time=216.766..776.032 rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp\n> without time zone) AND (\"DateTime\" <= \\'2020-07-23\n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5223\n> Buffers: shared hit=423 read=4821\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n> (cost=0.00..18.61 rows=824 width=0) (actual time=109.000..109.001\n> rows=5228 loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared hit=3 read=18\n> Planning time: 456.315 ms\n> Execution time: 776.976 ms\n>\n>\n> *Query explain for Postgres on AWS RDS:*Bitmap Heap Scan on\n> SignalRecordsBlobs SignalRecordsBlobs (cost=190.02..13204.28 rows=6213\n> width=69) (actual time=2.215..14.505 rows=5122 loops=1)\n> Filter: ((\"DateTime\" >= \\'2019-11-28 14:00:12.5402\\'::timestamp\n> without time zone) AND (\"DateTime\" <= \\'2020-07-23\n> 21:12:32.249\\'::timestamp without time zone))\n> Heap Blocks: exact=5209\n> Buffers: shared hit=3290 read=1948\n> -> Bitmap Index Scan on IDX_SignalRecordsBlobs_SignalSettingId\n> (cost=0.00..188.46 rows=6405 width=0) (actual time=1.159..1.159 rows=5228\n> loops=1)\n> Index Cond: (\"SignalSettingId\" = 103)\n> Buffers: shared hit=3 read=26\n> Planning 
time: 0.407 ms\n> Execution time: 14.87 ms\n>\n>\n> *PostgreSQL version number running:• VM on GCP*: PostgreSQL 11.10 (Debian\n> 11.10-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6)\n> 8.3.0, 64-bit\n> *• Managed by RDS on AWS:* PostgreSQL 11.10 on x86_64-pc-linux-gnu,\n> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>\n>\n> *How PostgreSQL was installed:• VM on GCP*: Already installed when\n> created VM running Debian on Google Console.\n> *• Managed by RDS on AWS:* RDS managed the installation.\n>\n>\n> *Changes made to the settings in the postgresql.conf file:*Here are some\n> postgres parameters that might be useful:\n> *Instance on VM on GCP (2 vCPUs, 2 GB memory, 800 GB disk):*\n> • effective_cache_size: 1496MB\n> • maintenance_work_mem: 255462kB (close to 249MB)\n> • max_wal_size: 1GB\n> • min_wal_size: 512MB\n> • shared_buffers: 510920kB (close to 499MB)\n> • max_locks_per_transaction 1000\n> • wal_buffers: 15320kB (close to 15MB)\n> • work_mem: 2554kB\n> • effective_io_concurrency: 200\n> • dynamic_shared_memory_type: posix\n> On this instance we installed a postgres extension called timescaledb to\n> gain performance on other tables. 
Some of these parameters were set using\n> recommendations from that extension.\n>\n> *Instance managed by RDS (2 vCPUs, 2 GiB RAM, 250GB disk, 750 de IOPS):*\n> • effective_cache_size: 1887792kB (close to 1844MB)\n> • maintenance_work_mem: 64MB\n> • max_wal_size: 2GB\n> • min_wal_size: 192MB\n> • shared_buffers: 943896kB (close to 922MB)\n> • max_locks_per_transaction 64\n>\n>\n> *Operating system and version by runing \"uname -a\":• VM on GCP:* Linux\n> {{{my instance name}}} 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2\n> (2021-01-30) x86_64 GNU/Linux\n> *• Managed by AWS RDS:* Aparently Red Hay as shown using SELECT version();\n>\n> *Program used to connect to PostgreSQL:* Python psycopg2.connect() to\n> create the connection and pandas read_sql_query() to query using that\n> connection.\n>\n> Thanks in advance\n>",
"msg_date": "Wed, 24 Feb 2021 16:16:25 +0100",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "\n\n> On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]> wrote:\n> \n> Hi, Julien\n> \n> Your hypothesis about network transfer makes sense. The query returns a big size byte array blobs.\n> \n> Is there a way to test the network speed against the instances? I have access to the network speed in gcp (5 Mb/s), but don't have access in aws rds.\n\nPerhaps what you should run is EXPLAIN ANALYZE SELECT...? My understanding is that EXPLAIN ANALYZE executes the query but discards the results. That doesn’t tell you the network speed of your AWS instance, but it does isolate the query execution speed (which is what I think you’re trying to measure) from the network speed.\n\nHope this is useful.\n\nCheers\nPhilip\n\n> \n> \n> Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]> escreveu:\n> Hi,\n> \n> On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> <[email protected]> wrote:\n> >\n> > I have 2 postgres instances created from the same dump (backup), one on a GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes and the second one takes less than 20s to run this simples query:\n> > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > I’ve run this query a few times to make sure both should be reading data from cache.\n> > I expect my postgres on GPC to be at least similar to the one managed by AWS RDS so that I can work on improvements parallelly and compare.\n> >\n> > DETAILS:\n> > [...]\n> > Planning time: 456.315 ms\n> > Execution time: 776.976 ms\n> >\n> > Query explain for Postgres on AWS RDS:\n> > [...]\n> > Planning time: 0.407 ms\n> > Execution time: 14.87 ms\n> \n> Those queries were executed in respectively ~1s and ~15ms (one thing\n> to note is that the slower one had less data in cache, which may or\n> may note account for the difference). Does those plans reflect the\n> reality of your slow executions? If yes it's likely due to quite slow\n> network transfer. Otherwise we would need an explain plan from the\n> slow execution, for which auto_explain can help you. See\n> https://www.postgresql.org/docs/11/auto-explain.html for more details.\n> \n> \n> -- \n> Att,\n> \n> Igor Gois | Sócio Consultor\n> (48) 99169-9889 | Skype: igor_msg\n> Site | Blog | LinkedIn | Facebook | Instagram\n> \n> \n\n\n\n",
"msg_date": "Thu, 25 Feb 2021 13:13:33 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Hi, Philip\n\nWe ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\", \"SignalRegisterId\",\n\"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND\n\"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND\n'2020-07-23T21:12:32.249000000';\n\nbut it was really fast. I think the results were discarded.\n\nAWS Execution time select without explain: 24.96505s (calculated in python\nclient)\nAWS Execution time select with explain but without analyze: 0.03876s\n(calculated in python client)\n\nhttps://explain.depesz.com/s/5HRO\n\nThanks in advance\n\nEm qui., 25 de fev. de 2021 às 15:13, Philip Semanchuk <\[email protected]> escreveu:\n\n>\n>\n> > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]>\n> wrote:\n> >\n> > Hi, Julien\n> >\n> > Your hypothesis about network transfer makes sense. The query returns a\n> big size byte array blobs.\n> >\n> > Is there a way to test the network speed against the instances? I have\n> access to the network speed in gcp (5 Mb/s), but don't have access in aws\n> rds.\n>\n> Perhaps what you should run is EXPLAIN ANALYZE SELECT...? My understanding\n> is that EXPLAIN ANALYZE executes the query but discards the results. That\n> doesn’t tell you the network speed of your AWS instance, but it does\n> isolate the query execution speed (which is what I think you’re trying to\n> measure) from the network speed.\n>\n> Hope this is useful.\n>\n> Cheers\n> Philip\n>\n> >\n> >\n> > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]>\n> escreveu:\n> > Hi,\n> >\n> > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> > <[email protected]> wrote:\n> > >\n> > > I have 2 postgres instances created from the same dump (backup), one\n> on a GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes\n> and the second one takes less than 20s to run this simples query:\n> > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > > I’ve run this query a few times to make sure both should be reading\n> data from cache.\n> > > I expect my postgres on GPC to be at least similar to the one managed\n> by AWS RDS so that I can work on improvements parallelly and compare.\n> > >\n> > > DETAILS:\n> > > [...]\n> > > Planning time: 456.315 ms\n> > > Execution time: 776.976 ms\n> > >\n> > > Query explain for Postgres on AWS RDS:\n> > > [...]\n> > > Planning time: 0.407 ms\n> > > Execution time: 14.87 ms\n> >\n> > Those queries were executed in respectively ~1s and ~15ms (one thing\n> > to note is that the slower one had less data in cache, which may or\n> > may note account for the difference). Does those plans reflect the\n> > reality of your slow executions? If yes it's likely due to quite slow\n> > network transfer. Otherwise we would need an explain plan from the\n> > slow execution, for which auto_explain can help you. 
See\n> > https://www.postgresql.org/docs/11/auto-explain.html for more details.\n> >\n> >\n> > --\n> > Att,\n> >\n> > Igor Gois | Sócio Consultor\n> > (48) 99169-9889 | Skype: igor_msg\n> > Site | Blog | LinkedIn | Facebook | Instagram\n> >\n> >\n>\n>\n\n-- \n*Att,*\n\n*Igor Gois | Sócio Consultor*\n(48) 99169-9889 | Skype: igor_msg\nSite <https://bixtecnologia.com.br/>| Blog\n<https://www.bixtecnologia.com.br/blog/> | LinkedIn\n<https://www.linkedin.com/company/bixtecnologia/>| Facebook\n<https://www.facebook.com/bix.tecnologia>| Instagram\n<https://www.instagram.com/bixtecnologia/>",
"msg_date": "Thu, 25 Feb 2021 17:46:58 -0300",
"msg_from": "Igor Gois <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "\n\n> On Feb 25, 2021, at 3:46 PM, Igor Gois <[email protected]> wrote:\n> \n> Hi, Philip\n> \n> We ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> \n> but it was really fast. I think the results were discarded.\n\nEXPLAIN and EXPLAIN ANALYZE are different in an important way. EXPLAIN merely plans the query, EXPLAIN ANALYZE plans *and executes* the query. From the doc —\n\n\"The ANALYZE option causes the statement to be actually executed, not only planned....Keep in mind that the statement is actually executed when the ANALYZE option is used. Although EXPLAIN will discard any output that a SELECT would return, other side effects of the statement will happen as usual. “\n\nhttps://www.postgresql.org/docs/12/sql-explain.html\n\n\n> \n> AWS Execution time select without explain: 24.96505s (calculated in python client)\n> AWS Execution time select with explain but without analyze: 0.03876s (calculated in python client)\n> \n> https://explain.depesz.com/s/5HRO\n> \n> Thanks in advance\n> \n> \n> Em qui., 25 de fev. de 2021 às 15:13, Philip Semanchuk <[email protected]> escreveu:\n> \n> \n> > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]> wrote:\n> > \n> > Hi, Julien\n> > \n> > Your hypothesis about network transfer makes sense. The query returns a big size byte array blobs.\n> > \n> > Is there a way to test the network speed against the instances? I have access to the network speed in gcp (5 Mb/s), but don't have access in aws rds.\n> \n> Perhaps what you should run is EXPLAIN ANALYZE SELECT...? My understanding is that EXPLAIN ANALYZE executes the query but discards the results. 
That doesn’t tell you the network speed of your AWS instance, but it does isolate the query execution speed (which is what I think you’re trying to measure) from the network speed.\n> \n> Hope this is useful.\n> \n> Cheers\n> Philip\n> \n> > \n> > \n> > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]> escreveu:\n> > Hi,\n> > \n> > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> > <[email protected]> wrote:\n> > >\n> > > I have 2 postgres instances created from the same dump (backup), one on a GCP VM and the other on AWS RDS. The first instance takes 18 minutes and the second one takes less than 20s to run this simples query:\n> > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > > I’ve run this query a few times to make sure both should be reading data from cache.\n> > > I expect my postgres on GPC to be at least similar to the one managed by AWS RDS so that I can work on improvements parallelly and compare.\n> > >\n> > > DETAILS:\n> > > [...]\n> > > Planning time: 456.315 ms\n> > > Execution time: 776.976 ms\n> > >\n> > > Query explain for Postgres on AWS RDS:\n> > > [...]\n> > > Planning time: 0.407 ms\n> > > Execution time: 14.87 ms\n> > \n> > Those queries were executed in respectively ~1s and ~15ms (one thing\n> > to note is that the slower one had less data in cache, which may or\n> > may note account for the difference). Does those plans reflect the\n> > reality of your slow executions? If yes it's likely due to quite slow\n> > network transfer. Otherwise we would need an explain plan from the\n> > slow execution, for which auto_explain can help you. 
See\n> > https://www.postgresql.org/docs/11/auto-explain.html for more details.\n> > \n> > \n> > -- \n> > Att,\n> > \n> > Igor Gois | Sócio Consultor\n> > (48) 99169-9889 | Skype: igor_msg\n> > Site | Blog | LinkedIn | Facebook | Instagram\n> > \n> > \n> \n> \n> \n> -- \n> Att,\n> \n> Igor Gois | Sócio Consultor\n> (48) 99169-9889 | Skype: igor_msg\n> Site | Blog | LinkedIn | Facebook | Instagram\n> \n> \n\n\n\n",
"msg_date": "Thu, 25 Feb 2021 15:53:13 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Philip,\n\nThe results in first email in this thread were using explain analyze.\n\nI thought that you asked to run using only 'explain'. My bad.\n\nThe point is, the execution time with explain analyze is less the 1 second.\nBut the actual execution time (calculated from the python client) is 24\nseconds (aws) and 300+ seconds in gcp\n\nThank you\n\nEm qui., 25 de fev. de 2021 às 17:53, Philip Semanchuk <\[email protected]> escreveu:\n\n>\n>\n> > On Feb 25, 2021, at 3:46 PM, Igor Gois <[email protected]>\n> wrote:\n> >\n> > Hi, Philip\n> >\n> > We ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\",\n> \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\"\n> = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND\n> '2020-07-23T21:12:32.249000000';\n> >\n> > but it was really fast. I think the results were discarded.\n>\n> EXPLAIN and EXPLAIN ANALYZE are different in an important way. EXPLAIN\n> merely plans the query, EXPLAIN ANALYZE plans *and executes* the query.\n> From the doc —\n>\n> \"The ANALYZE option causes the statement to be actually executed, not only\n> planned....Keep in mind that the statement is actually executed when the\n> ANALYZE option is used. Although EXPLAIN will discard any output that a\n> SELECT would return, other side effects of the statement will happen as\n> usual. “\n>\n> https://www.postgresql.org/docs/12/sql-explain.html\n>\n>\n> >\n> > AWS Execution time select without explain: 24.96505s (calculated in\n> python client)\n> > AWS Execution time select with explain but without analyze: 0.03876s\n> (calculated in python client)\n> >\n> > https://explain.depesz.com/s/5HRO\n> >\n> > Thanks in advance\n> >\n> >\n> > Em qui., 25 de fev. 
de 2021 às 15:13, Philip Semanchuk <\n> [email protected]> escreveu:\n> >\n> >\n> > > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]>\n> wrote:\n> > >\n> > > Hi, Julien\n> > >\n> > > Your hypothesis about network transfer makes sense. The query returns\n> a big size byte array blobs.\n> > >\n> > > Is there a way to test the network speed against the instances? I have\n> access to the network speed in gcp (5 Mb/s), but don't have access in aws\n> rds.\n> >\n> > Perhaps what you should run is EXPLAIN ANALYZE SELECT...? My\n> understanding is that EXPLAIN ANALYZE executes the query but discards the\n> results. That doesn’t tell you the network speed of your AWS instance, but\n> it does isolate the query execution speed (which is what I think you’re\n> trying to measure) from the network speed.\n> >\n> > Hope this is useful.\n> >\n> > Cheers\n> > Philip\n> >\n> > >\n> > >\n> > > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <\n> [email protected]> escreveu:\n> > > Hi,\n> > >\n> > > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> > > <[email protected]> wrote:\n> > > >\n> > > > I have 2 postgres instances created from the same dump (backup), one\n> on a GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes\n> and the second one takes less than 20s to run this simples query:\n> > > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > > > I’ve run this query a few times to make sure both should be reading\n> data from cache.\n> > > > I expect my postgres on GPC to be at least similar to the one\n> managed by AWS RDS so that I can work on improvements parallelly and\n> compare.\n> > > >\n> > > > DETAILS:\n> > > > [...]\n> > > > Planning time: 456.315 ms\n> > > > Execution time: 776.976 ms\n> > > >\n> > > > Query explain for Postgres on AWS RDS:\n> > > > [...]\n> > > > Planning time: 0.407 ms\n> > > > Execution time: 14.87 ms\n> > >\n> > > Those queries were executed in respectively ~1s and ~15ms (one thing\n> > > to note is that the slower one had less data in cache, which may or\n> > > may note account for the difference). Does those plans reflect the\n> > > reality of your slow executions? If yes it's likely due to quite slow\n> > > network transfer. Otherwise we would need an explain plan from the\n> > > slow execution, for which auto_explain can help you. 
See\n> > > https://www.postgresql.org/docs/11/auto-explain.html for more details.\n> > >\n> > >\n> > > --\n> > > Att,\n> > >\n> > > Igor Gois | Sócio Consultor\n> > > (48) 99169-9889 | Skype: igor_msg\n> > > Site | Blog | LinkedIn | Facebook | Instagram\n> > >\n> > >\n> >\n> >\n> >\n> > --\n> > Att,\n> >\n> > Igor Gois | Sócio Consultor\n> > (48) 99169-9889 | Skype: igor_msg\n> > Site | Blog | LinkedIn | Facebook | Instagram\n> >\n> >\n>\n>\n\n-- \n*Att,*\n\n*Igor Gois | Sócio Consultor*\n(48) 99169-9889 | Skype: igor_msg\nSite <https://bixtecnologia.com.br/>| Blog\n<https://www.bixtecnologia.com.br/blog/> | LinkedIn\n<https://www.linkedin.com/company/bixtecnologia/>| Facebook\n<https://www.facebook.com/bix.tecnologia>| Instagram\n<https://www.instagram.com/bixtecnologia/>",
"msg_date": "Thu, 25 Feb 2021 18:04:50 -0300",
"msg_from": "Igor Gois <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "\n\n> On Feb 25, 2021, at 4:04 PM, Igor Gois <[email protected]> wrote:\n> \n> Philip,\n> \n> The results in first email in this thread were using explain analyze.\n> \n> I thought that you asked to run using only 'explain'. My bad.\n> \n> The point is, the execution time with explain analyze is less the 1 second. But the actual execution time (calculated from the python client) is 24 seconds (aws) and 300+ seconds in gcp\n\nOh OK, sorry, I wasn’t following. Yes, network speed sounds like the source of the problem. \n\nUnder AWS sometimes we log into an EC2 instance if we have to run a query that generates a lot of data so that both server and client are inside AWS. If GCP has something similar to EC2, it might be an interesting experiment to run your query from there and see how much, if any, that changes the time it takes to get results.\n\nHope this helps\nPhilip\n\n\n\n> \n> Em qui., 25 de fev. de 2021 às 17:53, Philip Semanchuk <[email protected]> escreveu:\n> \n> \n> > On Feb 25, 2021, at 3:46 PM, Igor Gois <[email protected]> wrote:\n> > \n> > Hi, Philip\n> > \n> > We ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > \n> > but it was really fast. I think the results were discarded.\n> \n> EXPLAIN and EXPLAIN ANALYZE are different in an important way. EXPLAIN merely plans the query, EXPLAIN ANALYZE plans *and executes* the query. From the doc —\n> \n> \"The ANALYZE option causes the statement to be actually executed, not only planned....Keep in mind that the statement is actually executed when the ANALYZE option is used. Although EXPLAIN will discard any output that a SELECT would return, other side effects of the statement will happen as usual. 
“\n> \n> https://www.postgresql.org/docs/12/sql-explain.html\n> \n> \n> > \n> > AWS Execution time select without explain: 24.96505s (calculated in python client)\n> > AWS Execution time select with explain but without analyze: 0.03876s (calculated in python client)\n> > \n> > https://explain.depesz.com/s/5HRO\n> > \n> > Thanks in advance\n> > \n> > \n> > Em qui., 25 de fev. de 2021 às 15:13, Philip Semanchuk <[email protected]> escreveu:\n> > \n> > \n> > > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]> wrote:\n> > > \n> > > Hi, Julien\n> > > \n> > > Your hypothesis about network transfer makes sense. The query returns a big size byte array blobs.\n> > > \n> > > Is there a way to test the network speed against the instances? I have access to the network speed in gcp (5 Mb/s), but don't have access in aws rds.\n> > \n> > Perhaps what you should run is EXPLAIN ANALYZE SELECT...? My understanding is that EXPLAIN ANALYZE executes the query but discards the results. That doesn’t tell you the network speed of your AWS instance, but it does isolate the query execution speed (which is what I think you’re trying to measure) from the network speed.\n> > \n> > Hope this is useful.\n> > \n> > Cheers\n> > Philip\n> > \n> > > \n> > > \n> > > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]> escreveu:\n> > > Hi,\n> > > \n> > > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> > > <[email protected]> wrote:\n> > > >\n> > > > I have 2 postgres instances created from the same dump (backup), one on a GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes and the second one takes less than 20s to run this simples query:\n> > > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > > > I’ve run this query a few times to make sure both should be reading data from cache.\n> > > > I expect my postgres on GPC to be at least similar to the one managed by AWS RDS so that I can work on improvements parallelly and compare.\n> > > >\n> > > > DETAILS:\n> > > > [...]\n> > > > Planning time: 456.315 ms\n> > > > Execution time: 776.976 ms\n> > > >\n> > > > Query explain for Postgres on AWS RDS:\n> > > > [...]\n> > > > Planning time: 0.407 ms\n> > > > Execution time: 14.87 ms\n> > > \n> > > Those queries were executed in respectively ~1s and ~15ms (one thing\n> > > to note is that the slower one had less data in cache, which may or\n> > > may note account for the difference). Does those plans reflect the\n> > > reality of your slow executions? If yes it's likely due to quite slow\n> > > network transfer. Otherwise we would need an explain plan from the\n> > > slow execution, for which auto_explain can help you. See\n> > > https://www.postgresql.org/docs/11/auto-explain.html for more details.\n> > > \n> > > \n> > > -- \n> > > Att,\n> > > \n> > > Igor Gois | Sócio Consultor\n> > > (48) 99169-9889 | Skype: igor_msg\n> > > Site | Blog | LinkedIn | Facebook | Instagram\n> > > \n> > > \n> > \n> > \n> > \n> > -- \n> > Att,\n> > \n> > Igor Gois | Sócio Consultor\n> > (48) 99169-9889 | Skype: igor_msg\n> > Site | Blog | LinkedIn | Facebook | Instagram\n> > \n> > \n> \n> \n> \n> -- \n> Att,\n> \n> Igor Gois | Sócio Consultor\n> (48) 99169-9889 | Skype: igor_msg\n> Site | Blog | LinkedIn | Facebook | Instagram\n> \n> \n\n\n\n",
"msg_date": "Thu, 25 Feb 2021 17:32:32 -0500",
"msg_from": "Philip Semanchuk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Since this is a comparison to RDS, and the goal presumably is to make the\ntest as even as possible, you will want to pay attention to the network IO\ncapacity for the client and the server in both tests.\n\nFor RDS, you will be unable to run the client software locally on the\nserver hardware, so you should plan to do the same for the GCP comparison.\n\nWhat is the machine size you are using for your RDS instance? Each machine\nsize will specify CPU and RAM along with disk and network IO capacity.\n\nIs your GCP VM where you are running PG ( a GCP VM is the equivalent of an\nEC2 instance, by the way ) roughly equivalent to that RDS instance?\n\nFinally, is the network topology roughly equivalent? Are you performing\nthese tests with the same region and/or availability zone?\n\n\n\nOn Thu, Feb 25, 2021 at 3:32 PM Philip Semanchuk <\[email protected]> wrote:\n\n>\n>\n> > On Feb 25, 2021, at 4:04 PM, Igor Gois <[email protected]>\n> wrote:\n> >\n> > Philip,\n> >\n> > The results in first email in this thread were using explain analyze.\n> >\n> > I thought that you asked to run using only 'explain'. My bad.\n> >\n> > The point is, the execution time with explain analyze is less the 1\n> second. But the actual execution time (calculated from the python client)\n> is 24 seconds (aws) and 300+ seconds in gcp\n>\n> Oh OK, sorry, I wasn’t following. Yes, network speed sounds like the\n> source of the problem.\n>\n> Under AWS sometimes we log into an EC2 instance if we have to run a query\n> that generates a lot of data so that both server and client are inside AWS.\n> If GCP has something similar to EC2, it might be an interesting experiment\n> to run your query from there and see how much, if any, that changes the\n> time it takes to get results.\n>\n> Hope this helps\n> Philip\n>\n>\n>\n> >\n> > Em qui., 25 de fev. 
de 2021 às 17:53, Philip Semanchuk <\n> [email protected]> escreveu:\n> >\n> >\n> > > On Feb 25, 2021, at 3:46 PM, Igor Gois <[email protected]>\n> wrote:\n> > >\n> > > Hi, Philip\n> > >\n> > > We ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\",\n> \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\"\n> = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND\n> '2020-07-23T21:12:32.249000000';\n> > >\n> > > but it was really fast. I think the results were discarded.\n> >\n> > EXPLAIN and EXPLAIN ANALYZE are different in an important way. EXPLAIN\n> merely plans the query, EXPLAIN ANALYZE plans *and executes* the query.\n> From the doc —\n> >\n> > \"The ANALYZE option causes the statement to be actually executed, not\n> only planned....Keep in mind that the statement is actually executed when\n> the ANALYZE option is used. Although EXPLAIN will discard any output that a\n> SELECT would return, other side effects of the statement will happen as\n> usual. “\n> >\n> > https://www.postgresql.org/docs/12/sql-explain.html\n> >\n> >\n> > >\n> > > AWS Execution time select without explain: 24.96505s (calculated in\n> python client)\n> > > AWS Execution time select with explain but without analyze: 0.03876s\n> (calculated in python client)\n> > >\n> > > https://explain.depesz.com/s/5HRO\n> > >\n> > > Thanks in advance\n> > >\n> > >\n> > > Em qui., 25 de fev. de 2021 às 15:13, Philip Semanchuk <\n> [email protected]> escreveu:\n> > >\n> > >\n> > > > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]>\n> wrote:\n> > > >\n> > > > Hi, Julien\n> > > >\n> > > > Your hypothesis about network transfer makes sense. The query\n> returns a big size byte array blobs.\n> > > >\n> > > > Is there a way to test the network speed against the instances? I\n> have access to the network speed in gcp (5 Mb/s), but don't have access in\n> aws rds.\n> > >\n> > > Perhaps what you should run is EXPLAIN ANALYZE SELECT...? 
My\n> understanding is that EXPLAIN ANALYZE executes the query but discards the\n> results. That doesn’t tell you the network speed of your AWS instance, but\n> it does isolate the query execution speed (which is what I think you’re\n> trying to measure) from the network speed.\n> > >\n> > > Hope this is useful.\n> > >\n> > > Cheers\n> > > Philip\n> > >\n> > > >\n> > > >\n> > > > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <\n> [email protected]> escreveu:\n> > > > Hi,\n> > > >\n> > > > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > I have 2 postgres instances created from the same dump (backup),\n> one on a GCP VM and the other on AWS RDS. The first instance takes 18\n> minutes and the second one takes less than 20s to run this simples query:\n> > > > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM\n> \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN\n> '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n> > > > > I’ve run this query a few times to make sure both should be\n> reading data from cache.\n> > > > > I expect my postgres on GPC to be at least similar to the one\n> managed by AWS RDS so that I can work on improvements parallelly and\n> compare.\n> > > > >\n> > > > > DETAILS:\n> > > > > [...]\n> > > > > Planning time: 456.315 ms\n> > > > > Execution time: 776.976 ms\n> > > > >\n> > > > > Query explain for Postgres on AWS RDS:\n> > > > > [...]\n> > > > > Planning time: 0.407 ms\n> > > > > Execution time: 14.87 ms\n> > > >\n> > > > Those queries were executed in respectively ~1s and ~15ms (one thing\n> > > > to note is that the slower one had less data in cache, which may or\n> > > > may note account for the difference). Does those plans reflect the\n> > > > reality of your slow executions? If yes it's likely due to quite\n> slow\n> > > > network transfer. 
Otherwise we would need an explain plan from the\n> > > > slow execution, for which auto_explain can help you. See\n> > > > https://www.postgresql.org/docs/11/auto-explain.html for more\n> details.\n> > > >\n> > > >\n> > > > --\n> > > > Att,\n> > > >\n> > > > Igor Gois | Sócio Consultor\n> > > > (48) 99169-9889 | Skype: igor_msg\n> > > > Site | Blog | LinkedIn | Facebook | Instagram\n> > > >\n> > > >\n> > >\n> > >\n> > >\n> > > --\n> > > Att,\n> > >\n> > > Igor Gois | Sócio Consultor\n> > > (48) 99169-9889 | Skype: igor_msg\n> > > Site | Blog | LinkedIn | Facebook | Instagram\n> > >\n> > >\n> >\n> >\n> >\n> > --\n> > Att,\n> >\n> > Igor Gois | Sócio Consultor\n> > (48) 99169-9889 | Skype: igor_msg\n> > Site | Blog | LinkedIn | Facebook | Instagram\n> >\n> >\n>\n>\n>",
"msg_date": "Thu, 25 Feb 2021 18:06:15 -0700",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
},
{
"msg_contents": "Have you tried to set the instance running on GCP to have similar\nshared_buffers as the AWS database?\n\nWhat you described has a much lower cache hit rate on GCP and 2X the\nshared buffers on AWS, which could well explain much of the difference\nin execution times.\n\nDETAILS:\nQuery explain for Postgres on GCP VM:\n Buffers: shared hit=423 read=4821\n\nQuery explain for Postgres on AWS RDS:\n Buffers: shared hit=3290 read=1948\n\nand the configuration:\n\nInstance on VM on GCP (2 vCPUs, 2 GB memory, 800 GB disk):\n• shared_buffers: 510920kB (close to 499MB)\n\nInstance managed by RDS (2 vCPUs, 2 GiB RAM, 250GB disk, 750 de IOPS):\n• shared_buffers: 943896kB (close to 922MB)\n\n\nCheers\nHannu\n\nOn Fri, Feb 26, 2021 at 9:16 AM Justin Pitts <[email protected]> wrote:\n>\n> Since this is a comparison to RDS, and the goal presumably is to make the test as even as possible, you will want to pay attention to the network IO capacity for the client and the server in both tests.\n>\n> For RDS, you will be unable to run the client software locally on the server hardware, so you should plan to do the same for the GCP comparison.\n>\n> What is the machine size you are using for your RDS instance? Each machine size will specify CPU and RAM along with disk and network IO capacity.\n>\n> Is your GCP VM where you are running PG ( a GCP VM is the equivalent of an EC2 instance, by the way ) roughly equivalent to that RDS instance?\n>\n> Finally, is the network topology roughly equivalent? Are you performing these tests with the same region and/or availability zone?\n>\n>\n>\n> On Thu, Feb 25, 2021 at 3:32 PM Philip Semanchuk <[email protected]> wrote:\n>>\n>>\n>>\n>> > On Feb 25, 2021, at 4:04 PM, Igor Gois <[email protected]> wrote:\n>> >\n>> > Philip,\n>> >\n>> > The results in first email in this thread were using explain analyze.\n>> >\n>> > I thought that you asked to run using only 'explain'. 
My bad.\n>> >\n>> > The point is, the execution time with explain analyze is less the 1 second. But the actual execution time (calculated from the python client) is 24 seconds (aws) and 300+ seconds in gcp\n>>\n>> Oh OK, sorry, I wasn’t following. Yes, network speed sounds like the source of the problem.\n>>\n>> Under AWS sometimes we log into an EC2 instance if we have to run a query that generates a lot of data so that both server and client are inside AWS. If GCP has something similar to EC2, it might be an interesting experiment to run your query from there and see how much, if any, that changes the time it takes to get results.\n>>\n>> Hope this helps\n>> Philip\n>>\n>>\n>>\n>> >\n>> > Em qui., 25 de fev. de 2021 às 17:53, Philip Semanchuk <[email protected]> escreveu:\n>> >\n>> >\n>> > > On Feb 25, 2021, at 3:46 PM, Igor Gois <[email protected]> wrote:\n>> > >\n>> > > Hi, Philip\n>> > >\n>> > > We ran: EXPLAIN (FORMAT JSON) SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n>> > >\n>> > > but it was really fast. I think the results were discarded.\n>> >\n>> > EXPLAIN and EXPLAIN ANALYZE are different in an important way. EXPLAIN merely plans the query, EXPLAIN ANALYZE plans *and executes* the query. From the doc —\n>> >\n>> > \"The ANALYZE option causes the statement to be actually executed, not only planned....Keep in mind that the statement is actually executed when the ANALYZE option is used. Although EXPLAIN will discard any output that a SELECT would return, other side effects of the statement will happen as usual. 
“\n>> >\n>> > https://www.postgresql.org/docs/12/sql-explain.html\n>> >\n>> >\n>> > >\n>> > > AWS Execution time select without explain: 24.96505s (calculated in python client)\n>> > > AWS Execution time select with explain but without analyze: 0.03876s (calculated in python client)\n>> > >\n>> > > https://explain.depesz.com/s/5HRO\n>> > >\n>> > > Thanks in advance\n>> > >\n>> > >\n>> > > Em qui., 25 de fev. de 2021 às 15:13, Philip Semanchuk <[email protected]> escreveu:\n>> > >\n>> > >\n>> > > > On Feb 24, 2021, at 10:11 AM, Igor Gois <[email protected]> wrote:\n>> > > >\n>> > > > Hi, Julien\n>> > > >\n>> > > > Your hypothesis about network transfer makes sense. The query returns a big size byte array blobs.\n>> > > >\n>> > > > Is there a way to test the network speed against the instances? I have access to the network speed in gcp (5 Mb/s), but don't have access in aws rds.\n>> > >\n>> > > Perhaps what you should run is EXPLAIN ANALYZE SELECT...? My understanding is that EXPLAIN ANALYZE executes the query but discards the results. That doesn’t tell you the network speed of your AWS instance, but it does isolate the query execution speed (which is what I think you’re trying to measure) from the network speed.\n>> > >\n>> > > Hope this is useful.\n>> > >\n>> > > Cheers\n>> > > Philip\n>> > >\n>> > > >\n>> > > >\n>> > > > Em qua., 24 de fev. de 2021 às 10:35, Julien Rouhaud <[email protected]> escreveu:\n>> > > > Hi,\n>> > > >\n>> > > > On Wed, Feb 24, 2021 at 6:14 AM Maurici Meneghetti\n>> > > > <[email protected]> wrote:\n>> > > > >\n>> > > > > I have 2 postgres instances created from the same dump (backup), one on a GCP VM and the other on AWS RDS. 
The first instance takes 18 minutes and the second one takes less than 20s to run this simples query:\n>> > > > > SELECT \"Id\", \"DateTime\", \"SignalRegisterId\", \"Raw\" FROM \"SignalRecordsBlobs\" WHERE \"SignalSettingId\" = 103 AND \"DateTime\" BETWEEN '2019-11-28T14:00:12.540200000' AND '2020-07-23T21:12:32.249000000';\n>> > > > > I’ve run this query a few times to make sure both should be reading data from cache.\n>> > > > > I expect my postgres on GPC to be at least similar to the one managed by AWS RDS so that I can work on improvements parallelly and compare.\n>> > > > >\n>> > > > > DETAILS:\n>> > > > > [...]\n>> > > > > Planning time: 456.315 ms\n>> > > > > Execution time: 776.976 ms\n>> > > > >\n>> > > > > Query explain for Postgres on AWS RDS:\n>> > > > > [...]\n>> > > > > Planning time: 0.407 ms\n>> > > > > Execution time: 14.87 ms\n>> > > >\n>> > > > Those queries were executed in respectively ~1s and ~15ms (one thing\n>> > > > to note is that the slower one had less data in cache, which may or\n>> > > > may note account for the difference). Does those plans reflect the\n>> > > > reality of your slow executions? If yes it's likely due to quite slow\n>> > > > network transfer. Otherwise we would need an explain plan from the\n>> > > > slow execution, for which auto_explain can help you. 
See\n>> > > > https://www.postgresql.org/docs/11/auto-explain.html for more details.\n>> > > >\n>> > > >\n>> > > > --\n>> > > > Att,\n>> > > >\n>> > > > Igor Gois | Sócio Consultor\n>> > > > (48) 99169-9889 | Skype: igor_msg\n>> > > > Site | Blog | LinkedIn | Facebook | Instagram\n>> > > >\n>> > > >\n>> > >\n>> > >\n>> > >\n>> > > --\n>> > > Att,\n>> > >\n>> > > Igor Gois | Sócio Consultor\n>> > > (48) 99169-9889 | Skype: igor_msg\n>> > > Site | Blog | LinkedIn | Facebook | Instagram\n>> > >\n>> > >\n>> >\n>> >\n>> >\n>> > --\n>> > Att,\n>> >\n>> > Igor Gois | Sócio Consultor\n>> > (48) 99169-9889 | Skype: igor_msg\n>> > Site | Blog | LinkedIn | Facebook | Instagram\n>> >\n>> >\n>>\n>>\n>>\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:01:06 +0100",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance comparing GCP and AWS"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI want to examine the exhaustive search and not the geqo here. I'd expect the exhaustive search to give the plan with the lowest cost, but apparently it doesn't. I have found a few dozen different queries where that isn't the case. I attached one straightforward example. For the join of two partitions a row-first approach would have been reasonable.\n\nPlease note that I have enable_partitionwise_join on.\n\nBoth tables are partitioned by si, agv and have a primary key leading with si, agv.\n\nI know that for this particular query the planning time dominates the execution time, but that is not my issue here. It's more a synthetic boiled-down example coming from a more complex one with notably higher execution time.\n\nThank you for having a look!\n\n\nRegards\n\nArne",
"msg_date": "Thu, 25 Feb 2021 23:07:08 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "Arne Roland <[email protected]> writes:\n> I want to examine the exhaustive search and not the geqo here. I'd expect the exhaustive search to give the plan with the lowest cost, but apparently it doesn't. I have found a few dozen different querys where that isn't the case. I attached one straight forward example. For the join of two partitions a row first approach would have been reasonable.\n\nHmm. While the search should be exhaustive, there are pretty aggressive\npruning heuristics (mostly in and around add_path()) that can cause us to\ndrop paths that don't seem to be enough better than other alternatives.\nI suspect that the seqscan plan may have beaten out the other one at\nsome earlier stage that didn't think that the startup-cost advantage\nwas sufficient reason to keep it.\n\nIt's also possible that you've found a bug. I notice that both\nplans are using incremental sort, which has been, um, rather buggy.\nHard to tell without a concrete test case to poke at.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Feb 2021 22:00:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "The startup cost is pretty expensive. This seems to be a common issue using partition-wise joins.\n\n\nI attached a simplified reproducer. Thanks for having a look!\n\n\nRegards\n\nArne\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Friday, February 26, 2021 4:00:18 AM\nTo: Arne Roland\nCc: [email protected]\nSubject: Re: Disabling options lowers the estimated cost of a query\n\nArne Roland <[email protected]> writes:\n> I want to examine the exhaustive search and not the geqo here. I'd expect the exhaustive search to give the plan with the lowest cost, but apparently it doesn't. I have found a few dozen different querys where that isn't the case. I attached one straight forward example. For the join of two partitions a row first approach would have been reasonable.\n\nHmm. While the search should be exhaustive, there are pretty aggressive\npruning heuristics (mostly in and around add_path()) that can cause us to\ndrop paths that don't seem to be enough better than other alternatives.\nI suspect that the seqscan plan may have beaten out the other one at\nsome earlier stage that didn't think that the startup-cost advantage\nwas sufficient reason to keep it.\n\nIt's also possible that you've found a bug. I notice that both\nplans are using incremental sort, which has been, um, rather buggy.\nHard to tell without a concrete test case to poke at.\n\n regards, tom lane",
"msg_date": "Thu, 15 Apr 2021 18:13:18 +0000",
"msg_from": "Arne Roland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "On 2/26/21 4:00 AM, Tom Lane wrote:\n> Arne Roland <[email protected]> writes:\n>> I want to examine the exhaustive search and not the geqo here. I'd\n>> expect the exhaustive search to give the plan with the lowest cost,\n>> but apparently it doesn't. I have found a few dozen different\n>> querys where that isn't the case. I attached one straight forward\n>> example. For the join of two partitions a row first approach would\n>> have been reasonable.\n> \n> Hmm. While the search should be exhaustive, there are pretty\n> aggressive pruning heuristics (mostly in and around add_path()) that\n> can cause us to drop paths that don't seem to be enough better than\n> other alternatives. I suspect that the seqscan plan may have beaten\n> out the other one at some earlier stage that didn't think that the\n> startup-cost advantage was sufficient reason to keep it.\n> \n> It's also possible that you've found a bug. I notice that both plans\n> are using incremental sort, which has been, um, rather buggy. Hard to\n> tell without a concrete test case to poke at.\n> \n\nWell, it's true incremental sort was not exactly bug free. But I very\nmuch doubt it's causing this issue, for two reasons:\n\n(a) It's trivial to simplify the reproducer further, so that there are\nno incremental sort nodes. See the attached script, which has just a\nsingle partition.\n\n(b) The incremental sort patch does not really tweak the costing or\nadd_path in ways that would break this.\n\n(c) PostgreSQL 12 has the same issue.\n\n\nIt seems the whole problem is in generate_orderedappend_paths(), which\nsimply considers two cases - paths with minimal startup cost and paths\nwith minimal total costs. 
But with LIMIT that does not work, of course.\n\nWith the simplified reproducer, I get these two plans:\n\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (cost=9748.11..10228.11 rows=10000 width=8)\n -> Merge Left Join (cost=9748.11..14548.11 rows=100000 width=8)\n Merge Cond: (a.id = b.id)\n -> Index Only Scan Backward using a0_pkey on a0 a\n (cost=0.29..3050.29 rows=100000 width=8)\n -> Sort (cost=9747.82..9997.82 rows=100000 width=8)\n Sort Key: b.id DESC\n -> Seq Scan on b0 b\n (cost=0.00..1443.00 rows=100000 width=8)\n (7 rows)\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (cost=0.58..3793.16 rows=10000 width=8)\n -> Nested Loop Left Join (cost=0.58..37926.29 rows=100000 ...)\n -> Index Only Scan Backward using a0_pkey on a0 a\n (cost=0.29..3050.29 rows=100000 width=8)\n -> Index Only Scan using b0_pkey on b0 b\n (cost=0.29..0.34 rows=1 width=8)\n Index Cond: (id = a.id)\n (5 rows)\n\n\nThe reason is quite simple - we get multiple join paths for each child\n(not visible in the plans, because there's just a single partition),\nwith these costs:\n\n A: nestloop_path startup 0.585000 total 35708.292500\n B: nestloop_path startup 0.292500 total 150004297.292500\n C: mergejoin_path startup 9748.112737 total 14102.092737\n\nThe one we'd like is the nestloop (A), and with disabled partition-wise\njoin that's what we pick. But generate_orderedappend_paths calls\nget_cheapest_path_for_pathkeys for startup/total cost, and gets the two\nother paths. Clearly, nestlop (B) is pretty terrible for LIMIT, because\nof the high total cost, and mergejoin (C) is what we end up with.\n\nNot sure how to fix this without making generate_orderedappend_paths way\nmore complicated ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 16 Apr 2021 06:52:04 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/26/21 4:00 AM, Tom Lane wrote:\n>> Hmm. While the search should be exhaustive, there are pretty\n>> aggressive pruning heuristics (mostly in and around add_path()) that\n>> can cause us to drop paths that don't seem to be enough better than\n>> other alternatives. I suspect that the seqscan plan may have beaten\n>> out the other one at some earlier stage that didn't think that the\n>> startup-cost advantage was sufficient reason to keep it.\n\n> It seems the whole problem is in generate_orderedappend_paths(), which\n> simply considers two cases - paths with minimal startup cost and paths\n> with minimal total costs. But with LIMIT that does not work, of course.\n\nAh, I see.\n\n> Not sure how to fix this without making generate_orderedappend_paths way\n> more complicated ...\n\nYou could, if root->tuple_fraction is > 0, also make a set of paths that\nare optimized for that fetch fraction. This is cheating to some extent,\nbecause it's only entirely accurate when your rel is the only one, but it\nseems better than ignoring the issue altogether. The code to select the\nright child path would be approximately like get_cheapest_fractional_path,\nexcept that you need to restrict it to paths with the right sort order.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Apr 2021 08:58:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "I wrote:\n> ... The code to select the\n> right child path would be approximately like get_cheapest_fractional_path,\n> except that you need to restrict it to paths with the right sort order.\n\nDuh, I forgot about get_cheapest_fractional_path_for_pathkeys().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Apr 2021 09:09:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
},
{
"msg_contents": "Hi,\n\nOn 4/16/21 3:09 PM, Tom Lane wrote:\n> I wrote:\n>> ... The code to select the\n>> right child path would be approximately like get_cheapest_fractional_path,\n>> except that you need to restrict it to paths with the right sort order.\n> \n> Duh, I forgot about get_cheapest_fractional_path_for_pathkeys().\n> \n> \t\t\tregards, tom lane\n> \n\nThe attached patch does fix the issue for me, producing the same plans\nwith and without partition-wise joins.\n\nIt probably needs a bit more work, though:\n\n1) If get_cheapest_fractional_path_for_pathkeys returns NULL, it's not\nclear whether to use cheapest_startup or cheapest_total with Sort on\ntop. Or maybe consider an incremental sort?\n\n2) Same for the cheapest_total - maybe there's a partially sorted path,\nand using it with incremental sort on top would be better than using\ncheapest_total_path + sort.\n\n3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry\nabout require_parallel_safe too.\n\n\nDoesn't seem like an urgent issue (has been there for a while, not sure\nwe even want to backpatch it). I'll add this to the next CF.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 16 Apr 2021 19:59:04 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling options lowers the estimated cost of a query"
}
] |
[
{
"msg_contents": "# Performance issues discovered from differential test\n\nHello. We are studying DBMS from GeorgiaTech and reporting interesting queries that potentially show performance problems.\n\nTo discover such cases, we used the following procedures:\n\n* Install four DBMSs with the latest version (PostgreSQL, SQLite, MySQL, CockroachDB)\n* Import TPCC-C benchmark for each DBMS\n* Generate random query (and translate the query to handle different dialects)\n* Run the query and measure the query execution time\n * Remove `LIMIT` to prevent any non-deterministic behaviors\n * Discard the test case if any DBMS returned an error\n * Some DBMS does not show the actual query execution time. In this case, query the `current time` before and after the actual query, and then we calculate the elapsed time.\n\nIn this report, we attached a few queries. We believe that there are many duplicated or false-positive cases. It would be great if we can get feedback about the reported queries. Once we know the root cause of the problem or false positive, we will make a follow-up report after we remove them all.\n\nFor example, the below query runs x1000 slower than other DBMSs from PostgreSQL.\n\n select ref_0.ol_amount as c0\n from order_line as ref_0\n left join stock as ref_1\n on (ref_0.ol_o_id = ref_1.s_w_id )\n inner join warehouse as ref_2\n on (ref_1.s_dist_09 is NULL)\n where ref_2.w_tax is NULL;\n\n\n* Query files link:\n\nwget https://gts3.org/~jjung/report1/pg.tar.gz\n\n* Execution result (execution time (second))\n\n| Filename | Postgres | Mysql | Cockroachdb | Sqlite | Ratio |\n|---------:|---------:|---------:|------------:|---------:|---------:|\n| 34065 | 1.31911 | 0.013 | 0.02493 | 1.025 | 101.47 |\n| 36399 | 3.60298 | 0.015 | 1.05593 | 3.487 | 240.20 |\n| 35767 | 4.01327 | 0.032 | 0.00727 | 2.311 | 552.19 |\n| 11132 | 4.3518 | 0.022 | 0.00635 | 3.617 | 684.88 |\n| 29658 | 4.6783 | 0.034 | 0.00778 | 2.63 | 601.10 |\n| 19522 | 1.06943 | 0.014 | 0.00569 | 
0.0009 | 1188.26 |\n| 38388 | 3.21383 | 0.013 | 0.00913 | 2.462 | 352.09 |\n| 7187 | 1.20267 | 0.015 | 0.00316 | 0.0009 | 1336.30 |\n| 24121 | 2.80611 | 0.014 | 0.03083 | 0.005 | 561.21 |\n| 25800 | 3.95163 | 0.024 | 0.73027 | 3.876 | 164.65 |\n| 2030 | 1.91181 | 0.013 | 0.04123 | 1.634 | 147.06 |\n| 17383 | 3.28785 | 0.014 | 0.00611 | 2.4 | 538.45 |\n| 19551 | 4.70967 | 0.014 | 0.00329 | 0.0009 | 5232.97 |\n| 26595 | 3.70423 | 0.014 | 0.00601 | 2.747 | 615.92 |\n| 469 | 4.18906 | 0.013 | 0.12343 | 0.016 | 322.23 |\n\n\n# Reproduce: install DBMSs, import TPCC benchmark, run query\n\n### Cockroach (from binary)\n\n```sh\n# install DBMS\nwget https://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\ntar xzvf cockroach-v20.2.5.linux-amd64.tgz\nsudo cp -i cockroach-v20.2.5.linux-amd64/cockroach /usr/local/bin/cockroach20\n\nsudo mkdir -p /usr/local/lib/cockroach\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/\n\n# test\nwhich cockroach20\ncockroach20 demo\n\n# start the DBMS (to make initial node files)\ncd ~\ncockroach20 start-single-node --insecure --store=node20 --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB --background\n# quit\ncockroach20 quit --insecure --host=localhost:26259\n\n# import DB\nmkdir -p node20/extern\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\ntar xzvf tpcc_cr.tar.gz\ncp tpcc_cr.sql node20/tpcc.sql\n\n# start the DBMS again and createdb\ncockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE DATABASE IF NOT EXISTS cockroachdb;\"\n--cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP DATABASE cockroachdb;\"\n\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"IMPORT PGDUMP 'nodelocal://self/tpcc.sql';\"\n\n# test\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"explain analyze select 
count(*) from order_line;\"\n\n# run query\ncockroach20 sql --insecure --host=localhost --port=26259 --database=cockroachdb < query.sql\n```\n\n\n### Postgre (from SRC)\n\n```sh\n# remove any previous postgres (if exist)\nsudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n\n# build latest postgres\ngit clone https://github.com/postgres/postgres.git\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n# install DBMS\nsudo su\nmake install\nadduser postgres\nrm -rf /usr/local/pgsql/data\nmkdir /usr/local/pgsql/data\nchown -R postgres /usr/local/pgsql/data\nsu - postgres\n/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n/usr/local/pgsql/bin/createdb jjung\n#/usr/local/pgsql/bin/psql postgresdb\n\n/usr/local/pgsql/bin/createuser -s {username}\n/usr/local/pgsql/bin/createdb postgresdb\n/usr/local/pgsql/bin/psql\n\n=# alter {username} with superuser\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\ntar xzvf tpcc_pg.tar.gz\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n\n# test\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from warehouse\"\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n\n# run query\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n```\n\n\n### Sqlite (from SRC)\n\n```sh\n# uninstall any existing\nsudo apt purge sliqte3\n\n# build latest sqlite from src\ngit clone https://github.com/sqlite/sqlite.git\ncd sqlite\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n# install DBMS\nsudo make install\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\ntar xzvf tpcc_sq.tar.gz\n\n# test\nsqlite3 tpcc_sq.db\nsqlite> select * from warehouse;\n\n# run query\nsqlite3 tpcc_sq.db < query.sql\n```\n\n\n### Mysql (install V8.0.X)\n\n```sh\n# remove mysql v5.X (if exist)\nsudo apt purge mysql-server mysql-common mysql-client\n\n# install\nwget 
https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\nsudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n # then select mysql 8.0 server\nsudo apt update\nsudo apt install mysql-client mysql-community-server mysql-server\n\n# check\nmysql -u root -p\n\n# create user mysql\n CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n alter user 'root'@'localhost' identified by 'mysql';\n\n# modify the conf (should add \"skip-grant-tables\" under [mysqld])\nsudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n\n# optimize\n# e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\ntar xzvf tpcc_my.tar.gz\nmysql -u mysql -pmysql -e \"create database mysqldb\"\nmysql -u mysql -pmysql mysqldb < tpcc_my.sql\n\n# test\nmysql -u mysql -pmysql mysqldb -e \"show tables\"\nmysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n\n# run query\nmysql -u mysql -pmysql mysqldb < query.sql\n```\n\n\n# Evaluation environment\n\n* Server: Ubuntu 18.04 (64bit)\n* CockroachDB: v20.2.5\n* PostgreSQL: latest commit (21 Feb, 2021)\n* MySQL: v8.0.23\n* SQLite: latest commit (21 Feb, 2021)",
"msg_date": "Sun, 28 Feb 2021 15:04:33 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential performance issues"
},
{
"msg_contents": "\nOn 2/28/21 10:04 AM, Jung, Jinho wrote:\n> # install DBMS\n> sudo su\n> make install\n> adduser postgres\n> rm -rf /usr/local/pgsql/data\n> mkdir /usr/local/pgsql/data\n> chown -R postgres /usr/local/pgsql/data\n> su - postgres\n> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n> /usr/local/pgsql/bin/createdb jjung\n\n\nUsing an untuned Postgres is fairly useless for a performance test. Out\nof the box, shared_buffers and work_mem are too low for almost all\nsituations, and many other settings can also usually be improved. The\ndefault settings are deliberately very conservative.\n\n\ncheers\n\n\nandrew\n\n\n\n-- Andrew Dunstan EDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Mar 2021 07:59:10 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Hi,\n\nIt is worthy work trying to compare performance across multiple database \nvendors, but unfortunately, it does not really come across as comparing \napples to apples.\n\nFor instance, configuration parameters: I do not see where you are \ndoing any modification of configuration at all. Since DBVendors are \ndifferent in how they apply \"out of the box\" configuration, this alone \ncan severely affect your comparison tests even though you are using a \nstandard in benchmark testing, TPCC-C. Postgres is especially \nconservative in \"out of the box\" configuration. For instance, \n\"work_mem\" is set to an incredibly low value of 4MB. This has a big \nimpact on many types of queries. Oracle has something called SGA_TARGET, \nwhich if enabled, self-regulates where the memory is utilized, thus not \nlimiting query memory specifically in the way Postgres does. This is \njust one example of a bazillion others where differences in \"out of the \nbox\" configuration makes these tests more like comparing apples to \noranges. There are many other areas of configuration related to memory, \ndisk, parallel execution, io concurrency, etc.\n\nIn sum, when comparing performance across different database vendors, \nthere are many other factors that must be taken into account when trying \nto do an impartial comparison. I just showed one: how configuration \ndifferences can skew the results.\n\nRegards,\nMichael Vitale\n\n\n\n\nJung, Jinho wrote on 2/28/2021 10:04 AM:\n> # Performance issues discovered from differential test\n>\n> Hello. 
We are studying DBMS from GeorgiaTech and reporting interesting \n> queries that potentially show performance problems.\n>\n> To discover such cases, we used the following procedures:\n>\n> * Install four DBMSs with the latest version (PostgreSQL, SQLite, \n> MySQL, CockroachDB)\n> * Import TPCC-C benchmark for each DBMS\n> * Generate random query (and translate the query to handle different \n> dialects)\n> * Run the query and measure the query execution time\n> � �* Remove `LIMIT` to prevent any non-deterministic behaviors\n> � �* Discard the test case if any DBMS returned an error\n> � �* Some DBMS does not show the actual query execution time. In this \n> case, query the `current time` before and after the actual query, and \n> then we calculate the elapsed time.\n>\n> In this report, we attached a few queries. We believe that there are \n> many duplicated or false-positive cases. It would be great if we can \n> get feedback about the reported queries. Once we know the root cause \n> of the problem or false positive, we will make a follow-up report \n> after we remove them all.\n>\n> For example, the below query runs x1000 slower than other DBMSs from \n> PostgreSQL.\n>\n> � � select ref_0.ol_amount as c0\n> � � from order_line as ref_0\n> � � � � left join stock as ref_1\n> � � � � � on (ref_0.ol_o_id = ref_1.s_w_id )\n> � � � � inner join warehouse as ref_2\n> � � � � on (ref_1.s_dist_09 is NULL)\n> � � where ref_2.w_tax is NULL;\n>\n>\n> * Query files link:\n>\n> wget https://gts3.org/~jjung/report1/pg.tar.gz\n>\n> * Execution result (execution time (second))\n>\n> | Filename | Postgres | � Mysql �| Cockroachdb | �Sqlite �| � Ratio �|\n> |---------:|---------:|---------:|------------:|---------:|---------:|\n> | � �34065 | �1.31911 | � �0.013 | � � 0.02493 | � �1.025 | 101.47 |\n> | � �36399 | �3.60298 | � �0.015 | � � 1.05593 | � �3.487 | 240.20 |\n> | � �35767 | �4.01327 | � �0.032 | � � 0.00727 | � �2.311 | 552.19 |\n> | � �11132 | � 4.3518 | � �0.022 | � 
� 0.00635 | � �3.617 | 684.88 |\n> | � �29658 | � 4.6783 | � �0.034 | � � 0.00778 | � � 2.63 | 601.10 |\n> | � �19522 | �1.06943 | � �0.014 | � � 0.00569 | � 0.0009 | �1188.26 |\n> | � �38388 | �3.21383 | � �0.013 | � � 0.00913 | � �2.462 | 352.09 |\n> | � � 7187 | �1.20267 | � �0.015 | � � 0.00316 | � 0.0009 | �1336.30 |\n> | � �24121 | �2.80611 | � �0.014 | � � 0.03083 | � �0.005 | 561.21 |\n> | � �25800 | �3.95163 | � �0.024 | � � 0.73027 | � �3.876 | 164.65 |\n> | � � 2030 | �1.91181 | � �0.013 | � � 0.04123 | � �1.634 | 147.06 |\n> | � �17383 | �3.28785 | � �0.014 | � � 0.00611 | � � �2.4 | 538.45 |\n> | � �19551 | �4.70967 | � �0.014 | � � 0.00329 | � 0.0009 | �5232.97 |\n> | � �26595 | �3.70423 | � �0.014 | � � 0.00601 | � �2.747 | 615.92 |\n> | � � �469 | �4.18906 | � �0.013 | � � 0.12343 | � �0.016 | 322.23 |\n>\n>\n> # Reproduce: install DBMSs, import TPCC benchmark, run query\n>\n> ### Cockroach (from binary)\n>\n> ```sh\n> # install DBMS\n> wget https://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\n> tar xzvf cockroach-v20.2.5.linux-amd64.tgz\n> sudo cp -i cockroach-v20.2.5.linux-amd64/cockroach \n> /usr/local/bin/cockroach20\n>\n> sudo mkdir -p /usr/local/lib/cockroach\n> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so \n> /usr/local/lib/cockroach/\n> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so \n> /usr/local/lib/cockroach/\n>\n> # test\n> which cockroach20\n> cockroach20 demo\n>\n> # start the DBMS (to make initial node files)\n> cd ~\n> cockroach20 start-single-node --insecure --store=node20 \n> --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB \n> --background\n> # quit\n> cockroach20 quit --insecure --host=localhost:26259\n>\n> # import DB\n> mkdir -p node20/extern\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\n> tar xzvf tpcc_cr.tar.gz\n> cp tpcc_cr.sql node20/tpcc.sql\n>\n> # start the DBMS again and createdb\n> cockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE 
\n> DATABASE IF NOT EXISTS cockroachdb;\"\n> --cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP \n> DATABASE cockroachdb;\"\n>\n> cockroach20 sql --insecure --host=localhost:26259 \n> --database=cockroachdb --execute=\"IMPORT PGDUMP \n> 'nodelocal://self/tpcc.sql';\"\n>\n> # test\n> cockroach20 sql --insecure --host=localhost:26259 \n> --database=cockroachdb --execute=\"explain analyze select count(*) from \n> order_line;\"\n>\n> # run query\n> cockroach20 sql --insecure --host=localhost --port=26259 \n> --database=cockroachdb < query.sql\n> ```\n>\n>\n> ### Postgre (from SRC)\n>\n> ```sh\n> # remove any previous postgres (if exist)\n> sudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n>\n> # build latest postgres\n> git clone https://github.com/postgres/postgres.git\n> mkdir bld\n> cd bld\n> ../configure\n> make -j 20\n>\n> # install DBMS\n> sudo su\n> make install\n> adduser postgres\n> rm -rf /usr/local/pgsql/data\n> mkdir /usr/local/pgsql/data\n> chown -R postgres /usr/local/pgsql/data\n> su - postgres\n> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n> /usr/local/pgsql/bin/createdb jjung\n> #/usr/local/pgsql/bin/psql postgresdb\n>\n> /usr/local/pgsql/bin/createuser -s {username}\n> /usr/local/pgsql/bin/createdb postgresdb\n> /usr/local/pgsql/bin/psql\n>\n> =# alter {username} with superuser\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\n> tar xzvf tpcc_pg.tar.gz\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n>\n> # test\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from \n> warehouse\"\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n>\n> # run query\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n> ```\n>\n>\n> ### Sqlite (from SRC)\n>\n> ```sh\n> # uninstall any existing\n> sudo apt purge sliqte3\n>\n> # build latest sqlite from src\n> git 
clone https://github.com/sqlite/sqlite.git\n> cd sqlite\n> mkdir bld\n> cd bld\n> ../configure\n> make -j 20\n>\n> # install DBMS\n> sudo make install\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\n> tar xzvf tpcc_sq.tar.gz\n>\n> # test\n> sqlite3 tpcc_sq.db\n> sqlite> select * from warehouse;\n>\n> # run query\n> sqlite3 tpcc_sq.db < query.sql\n> ```\n>\n>\n> ### Mysql (install V8.0.X)\n>\n> ```sh\n> # remove mysql v5.X (if exist)\n> sudo apt purge mysql-server mysql-common mysql-client\n>\n> # install\n> wget https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\n> sudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n> �# then select mysql 8.0 server\n> sudo apt update\n> sudo apt install mysql-client mysql-community-server mysql-server\n>\n> # check\n> mysql -u root -p\n>\n> # create user mysql\n> �CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n> �alter user 'root'@'localhost' identified by 'mysql';\n>\n> # modify the conf (should add \"skip-grant-tables\" under [mysqld])\n> sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n>\n> # optimize\n> # e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\n> tar xzvf tpcc_my.tar.gz\n> mysql -u mysql -pmysql -e \"create database mysqldb\"\n> mysql -u mysql -pmysql mysqldb < tpcc_my.sql\n>\n> # test\n> mysql -u mysql -pmysql mysqldb -e \"show tables\"\n> mysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n>\n> # run query\n> mysql -u mysql -pmysql mysqldb < query.sql\n> ```\n>\n>\n> # Evaluation environment\n>\n> * Server: Ubuntu 18.04 (64bit)\n> * CockroachDB: v20.2.5\n> * PostgreSQL: latest commit (21 Feb, 2021)\n> * MySQL: v8.0.23\n> * SQLite: latest commit (21 Feb, 2021)\n>\n\n\n\n\nHi,\n\nIt is worthy work trying to compare performance across multiple database\n vendors, but unfortunately, it does not really come across as comparing\n apples to apples. 
\n\nFor instance, configuration parameters:  I do not see where you are \ndoing any modification of configuration at all.  Since DBVendors are \ndifferent in how they apply \"out of the box\" configuration,  this alone \ncan severely affect your comparison tests even though you are using a \nstandard in benchmark testing, TPCC-C.  Postgres is especially \nconservative in \"out of the box\" configuration.  For instance, \n\"work_mem\" is set to an incredibly low value of 4MB.  This has a big \nimpact on many types of queries. Oracle has something called SGA_TARGET,\n which if enabled, self-regulates where the memory is utilized, thus not\n limiting query memory specifically in the way Postgres does.  This is \njust one example of a bazillion others where differences in \"out of the \nbox\" configuration makes these tests more like comparing apples to \noranges.  There are many other areas of configuration related to memory,\n disk, parallel execution, io concurrency, etc.\n\nIn sum, when comparing performance across different database vendors, \nthere are many other factors that must be taken into account when trying\n to do an impartial comparison.  I just showed one: how configuration \ndifferences can skew the results.\n\nRegards,\nMichael Vitale\n\n\n\n\nJung, Jinho wrote on 2/28/2021 10:04 AM:\n\n\n\n\n# Performance issues discovered from differential test\n\n\nHello. 
We are studying DBMS from GeorgiaTech and reporting \ninteresting queries that potentially show performance problems.\n\n\nTo discover such cases, we used the following procedures:\n\n\n* Install four DBMSs with the latest version (PostgreSQL, SQLite, \nMySQL, CockroachDB)\n* Import TPCC-C benchmark for each DBMS\n* Generate random query (and translate the query to handle \ndifferent dialects)\n* Run the query and measure the query execution time\n   * Remove `LIMIT` to prevent any non-deterministic behaviors\n   * Discard the test case if any DBMS returned an error\n   * Some DBMS does not show the actual query execution time. In \nthis case, query the `current time` before and after the actual query, \nand then we calculate the elapsed time.\n\n\nIn this report, we attached a few queries. We believe that there \nare many duplicated or false-positive cases. It would be great if we can\n get feedback about the reported queries. Once we know the root cause of\n the problem or false positive, we will make\n a follow-up report after we remove them all.\n\n\nFor example, the below query runs x1000 slower than other DBMSs \nfrom PostgreSQL.\n\n\n\n    select ref_0.ol_amount as c0\n    from order_line as ref_0\n        left join stock as ref_1\n          on (ref_0.ol_o_id = ref_1.s_w_id )\n        inner join warehouse as ref_2\n        on (ref_1.s_dist_09 is NULL)\n    where ref_2.w_tax is NULL;\n\n\n\n\n* Query files link:\n\n\nwget https://gts3.org/~jjung/report1/pg.tar.gz\n\n\n\n* Execution result (execution time (second))\n\n\n| Filename | Postgres |   Mysql  | Cockroachdb |  Sqlite  |   Ratio\n  |\n|---------:|---------:|---------:|------------:|---------:|---------:|\n|    34065 |  1.31911 |    0.013 |     0.02493 |    1.025 |   \n101.47 |\n|    36399 |  3.60298 |    0.015 |     1.05593 |    3.487 |   \n240.20 |\n|    35767 |  4.01327 |    0.032 |     0.00727 |    2.311 |   \n552.19 |\n|    11132 |   4.3518 |    0.022 |     0.00635 |    3.617 |   \n684.88 |\n|   
 29658 |   4.6783 |    0.034 |     0.00778 |     2.63 |   \n601.10 |\n|    19522 |  1.06943 |    0.014 |     0.00569 |   0.0009 | \n 1188.26 |\n|    38388 |  3.21383 |    0.013 |     0.00913 |    2.462 |   \n352.09 |\n|     7187 |  1.20267 |    0.015 |     0.00316 |   0.0009 | \n 1336.30 |\n|    24121 |  2.80611 |    0.014 |     0.03083 |    0.005 |   \n561.21 |\n|    25800 |  3.95163 |    0.024 |     0.73027 |    3.876 |   \n164.65 |\n|     2030 |  1.91181 |    0.013 |     0.04123 |    1.634 |   \n147.06 |\n|    17383 |  3.28785 |    0.014 |     0.00611 |      2.4 |   \n538.45 |\n|    19551 |  4.70967 |    0.014 |     0.00329 |   0.0009 | \n 5232.97 |\n|    26595 |  3.70423 |    0.014 |     0.00601 |    2.747 |   \n615.92 |\n|      469 |  4.18906 |    0.013 |     0.12343 |    0.016 |   \n322.23 |\n\n\n\n\n# Reproduce: install DBMSs, import TPCC benchmark, run query\n\n\n### Cockroach (from binary)\n\n\n```sh\n# install DBMS\nwget \nhttps://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\ntar xzvf cockroach-v20.2.5.linux-amd64.tgz\nsudo cp -i cockroach-v20.2.5.linux-amd64/cockroach \n/usr/local/bin/cockroach20\n\n\nsudo mkdir -p /usr/local/lib/cockroach\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so \n/usr/local/lib/cockroach/\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so \n/usr/local/lib/cockroach/\n\n\n# test\nwhich cockroach20\ncockroach20 demo\n\n\n# start the DBMS (to make initial node files)\ncd ~\ncockroach20 start-single-node --insecure --store=node20 \n--listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB \n--background\n# quit\ncockroach20 quit --insecure --host=localhost:26259\n\n\n# import DB\nmkdir -p node20/extern\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\ntar xzvf tpcc_cr.tar.gz\ncp tpcc_cr.sql node20/tpcc.sql\n\n\n# start the DBMS again and createdb\ncockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE\n DATABASE IF NOT EXISTS cockroachdb;\"\n--cockroach20 sql --insecure 
--host=localhost:26259 --execute=\"DROP\n DATABASE cockroachdb;\"\n\n\ncockroach20 sql --insecure --host=localhost:26259 \n--database=cockroachdb --execute=\"IMPORT PGDUMP \n'nodelocal://self/tpcc.sql';\"\n\n\n# test\ncockroach20 sql --insecure --host=localhost:26259 \n--database=cockroachdb --execute=\"explain analyze select count(*) from \norder_line;\"\n\n\n# run query\ncockroach20 sql --insecure --host=localhost --port=26259 \n--database=cockroachdb < query.sql\n```\n\n\n\n\n### Postgre (from SRC)\n\n\n```sh\n# remove any previous postgres (if exist)\nsudo apt-get --purge remove postgresql postgresql-doc \npostgresql-common\n\n\n# build latest postgres\ngit clone https://github.com/postgres/postgres.git\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n\n# install DBMS\nsudo su\nmake install\nadduser postgres\nrm -rf /usr/local/pgsql/data\nmkdir /usr/local/pgsql/data\nchown -R postgres /usr/local/pgsql/data\nsu - postgres\n/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile \nstart\n/usr/local/pgsql/bin/createdb jjung\n#/usr/local/pgsql/bin/psql postgresdb\n\n\n/usr/local/pgsql/bin/createuser -s {username}\n/usr/local/pgsql/bin/createdb postgresdb\n/usr/local/pgsql/bin/psql\n\n\n=# alter {username} with superuser\n\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\ntar xzvf tpcc_pg.tar.gz\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n\n\n# test\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from \nwarehouse\"\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n\n\n# run query\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n```\n\n\n\n\n### Sqlite (from SRC)\n\n\n```sh\n# uninstall any existing\nsudo apt purge sliqte3\n\n\n# build latest sqlite from src\ngit clone https://github.com/sqlite/sqlite.git\ncd sqlite\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n\n# install DBMS\nsudo make install\n\n\n# import DB\nwget 
https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\ntar xzvf tpcc_sq.tar.gz\n\n\n# test\nsqlite3 tpcc_sq.db\nsqlite> select * from warehouse;\n\n\n# run query\nsqlite3 tpcc_sq.db < query.sql\n```\n\n\n\n\n### Mysql (install V8.0.X)\n\n\n```sh\n# remove mysql v5.X (if exist)\nsudo apt purge mysql-server mysql-common mysql-client\n\n\n# install\nwget https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\nsudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n # then select mysql 8.0 server\nsudo apt update\nsudo apt install mysql-client mysql-community-server mysql-server\n\n\n# check\nmysql -u root -p\n\n\n# create user mysql\n CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n alter user 'root'@'localhost' identified by 'mysql';\n\n\n# modify the conf (should add \"skip-grant-tables\" under [mysqld])\nsudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n\n\n# optimize\n# e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66\n\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\ntar xzvf tpcc_my.tar.gz\nmysql -u mysql -pmysql -e \"create database mysqldb\"\nmysql -u mysql -pmysql mysqldb < tpcc_my.sql\n\n\n# test\nmysql -u mysql -pmysql mysqldb -e \"show tables\"\nmysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n\n\n# run query\nmysql -u mysql -pmysql mysqldb < query.sql\n```\n\n\n\n\n# Evaluation environment\n\n\n* Server: Ubuntu 18.04 (64bit)\n* CockroachDB: v20.2.5\n* PostgreSQL: latest commit (21 Feb, 2021)\n* MySQL: v8.0.23\n* SQLite: latest commit (21 Feb, 2021)\n",
"msg_date": "Mon, 1 Mar 2021 08:04:19 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Ha, Andrew beat me to the punch!\n\nAndrew Dunstan wrote on 3/1/2021 7:59 AM:\n> On 2/28/21 10:04 AM, Jung, Jinho wrote:\n>> # install DBMS\n>> sudo su\n>> make install\n>> adduser postgres\n>> rm -rf /usr/local/pgsql/data\n>> mkdir /usr/local/pgsql/data\n>> chown -R postgres /usr/local/pgsql/data\n>> su - postgres\n>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n>> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n>> /usr/local/pgsql/bin/createdb jjung\n>\n> Using an untuned Postgres is fairly useless for a performance test. Out\n> of the box, shared_buffers and work_mem are too low for almost all\n> situations, and many other settings can also usually be improved. The\n> default settings are deliberately very conservative.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> -- Andrew Dunstan EDB: https://www.enterprisedb.com\n>\n>\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 08:05:39 -0500",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Was just about to reply similarly. Mind you it perhaps does raise the\nquestion : are the default postgresql settings perhaps too\nconservative or too static. For example, in the absence of other\nexplicit configuration, might it make more sense for many use cases\nfor postgres to assess the physical memory available and make some\nhalf-sensible allocations based on that? I know there are downsides\nto assuming that postgresql has free reign to all that it sees, but\nthere are clearly also some downsides in assuming it has next to\nnothing. This could also be more correctly part of a package\ninstallation procedure, but just floating the idea ... some kind of\nauto-tuning vs ultra-conservative defaults.\n\nOn Mon, 1 Mar 2021 at 13:05, MichaelDBA <[email protected]> wrote:\n>\n> Ha, Andrew beat me to the punch!\n>\n> Andrew Dunstan wrote on 3/1/2021 7:59 AM:\n> > On 2/28/21 10:04 AM, Jung, Jinho wrote:\n> >> # install DBMS\n> >> sudo su\n> >> make install\n> >> adduser postgres\n> >> rm -rf /usr/local/pgsql/data\n> >> mkdir /usr/local/pgsql/data\n> >> chown -R postgres /usr/local/pgsql/data\n> >> su - postgres\n> >> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n> >> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n> >> /usr/local/pgsql/bin/createdb jjung\n> >\n> > Using an untuned Postgres is fairly useless for a performance test. Out\n> > of the box, shared_buffers and work_mem are too low for almost all\n> > situations, and many other settings can also usually be improved. The\n> > default settings are deliberately very conservative.\n> >\n> >\n> > cheers\n> >\n> >\n> > andrew\n> >\n> >\n> >\n> > -- Andrew Dunstan EDB: https://www.enterprisedb.com\n> >\n> >\n>\n>\n>\n\n\n",
"msg_date": "Mon, 1 Mar 2021 13:44:38 +0000",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Andrew, Bob, Michael\n\nThanks for the valuable feedback! Even with the default setting, PostgreSQL mostly showed good performance than other DBMSs. The reported queries are a very tiny portion among all executed queries (e.g., <0.001%).\n\nAs you guided, we will make the follow-up report after we test again with the performance-tuned PostgreSQL.\n\nHope we can contribute to improving PostgreSQL.\n\nThanks,\nJinho Jung\n\n________________________________\nFrom: MichaelDBA <[email protected]>\nSent: Monday, March 1, 2021 8:04 AM\nTo: Jung, Jinho <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Potential performance issues\n\nHi,\n\nIt is worthy work trying to compare performance across multiple database vendors, but unfortunately, it does not really come across as comparing apples to apples.\n\nFor instance, configuration parameters: I do not see where you are doing any modification of configuration at all. Since DBVendors are different in how they apply \"out of the box\" configuration, this alone can severely affect your comparison tests even though you are using a standard in benchmark testing, TPCC-C. Postgres is especially conservative in \"out of the box\" configuration. For instance, \"work_mem\" is set to an incredibly low value of 4MB. This has a big impact on many types of queries. Oracle has something called SGA_TARGET, which if enabled, self-regulates where the memory is utilized, thus not limiting query memory specifically in the way Postgres does. This is just one example of a bazillion others where differences in \"out of the box\" configuration makes these tests more like comparing apples to oranges. There are many other areas of configuration related to memory, disk, parallel execution, io concurrency, etc.\n\nIn sum, when comparing performance across different database vendors, there are many other factors that must be taken into account when trying to do an impartial comparison. 
I just showed one: how configuration differences can skew the results.\n\nRegards,\nMichael Vitale\n\n\n\n\nJung, Jinho wrote on 2/28/2021 10:04 AM:\n# Performance issues discovered from differential test\n\nHello. We are studying DBMS from GeorgiaTech and reporting interesting queries that potentially show performance problems.\n\nTo discover such cases, we used the following procedures:\n\n* Install four DBMSs with the latest version (PostgreSQL, SQLite, MySQL, CockroachDB)\n* Import TPCC-C benchmark for each DBMS\n* Generate random query (and translate the query to handle different dialects)\n* Run the query and measure the query execution time\n * Remove `LIMIT` to prevent any non-deterministic behaviors\n * Discard the test case if any DBMS returned an error\n * Some DBMS does not show the actual query execution time. In this case, query the `current time` before and after the actual query, and then we calculate the elapsed time.\n\nIn this report, we attached a few queries. We believe that there are many duplicated or false-positive cases. It would be great if we can get feedback about the reported queries. 
Once we know the root cause of the problem or false positive, we will make a follow-up report after we remove them all.\n\nFor example, the below query runs x1000 slower than other DBMSs from PostgreSQL.\n\n select ref_0.ol_amount as c0\n from order_line as ref_0\n left join stock as ref_1\n on (ref_0.ol_o_id = ref_1.s_w_id )\n inner join warehouse as ref_2\n on (ref_1.s_dist_09 is NULL)\n where ref_2.w_tax is NULL;\n\n\n* Query files link:\n\nwget https://gts3.org/~jjung/report1/pg.tar.gz<https://nam12.safelinks.protection.outlook.com/?url=https:%2F%2Fgts3.org%2F~jjung%2Freport1%2Fpg.tar.gz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195574204%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=Kk83y66NUIuc%2BQbB2xXaxxb64kQbiphE60Wqudmfkus%3D&reserved=0>\n\n* Execution result (execution time (second))\n\n| Filename | Postgres | Mysql | Cockroachdb | Sqlite | Ratio |\n|---------:|---------:|---------:|------------:|---------:|---------:|\n| 34065 | 1.31911 | 0.013 | 0.02493 | 1.025 | 101.47 |\n| 36399 | 3.60298 | 0.015 | 1.05593 | 3.487 | 240.20 |\n| 35767 | 4.01327 | 0.032 | 0.00727 | 2.311 | 552.19 |\n| 11132 | 4.3518 | 0.022 | 0.00635 | 3.617 | 684.88 |\n| 29658 | 4.6783 | 0.034 | 0.00778 | 2.63 | 601.10 |\n| 19522 | 1.06943 | 0.014 | 0.00569 | 0.0009 | 1188.26 |\n| 38388 | 3.21383 | 0.013 | 0.00913 | 2.462 | 352.09 |\n| 7187 | 1.20267 | 0.015 | 0.00316 | 0.0009 | 1336.30 |\n| 24121 | 2.80611 | 0.014 | 0.03083 | 0.005 | 561.21 |\n| 25800 | 3.95163 | 0.024 | 0.73027 | 3.876 | 164.65 |\n| 2030 | 1.91181 | 0.013 | 0.04123 | 1.634 | 147.06 |\n| 17383 | 3.28785 | 0.014 | 0.00611 | 2.4 | 538.45 |\n| 19551 | 4.70967 | 0.014 | 0.00329 | 0.0009 | 5232.97 |\n| 26595 | 3.70423 | 0.014 | 0.00601 | 2.747 | 615.92 |\n| 469 | 4.18906 | 0.013 | 0.12343 | 0.016 | 322.23 |\n\n\n# Reproduce: install DBMSs, import TPCC benchmark, run query\n\n### 
Cockroach (from binary)\n\n```sh\n# install DBMS\nwget https://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz<https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fbinaries.cockroachdb.com%2Fcockroach-v20.2.5.linux-amd64.tgz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195574204%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=yRiMQP9tuhmMg6QCeYMHCoLvSARheHptOSHUhMZLo2Y%3D&reserved=0>\ntar xzvf cockroach-v20.2.5.linux-amd64.tgz\nsudo cp -i cockroach-v20.2.5.linux-amd64/cockroach /usr/local/bin/cockroach20\n\nsudo mkdir -p /usr/local/lib/cockroach\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/\n\n# test\nwhich cockroach20\ncockroach20 demo\n\n# start the DBMS (to make initial node files)\ncd ~\ncockroach20 start-single-node --insecure --store=node20 --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB --background\n# quit\ncockroach20 quit --insecure --host=localhost:26259\n\n# import DB\nmkdir -p node20/extern\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz<https://nam12.safelinks.protection.outlook.com/?url=https:%2F%2Fgts3.org%2F~jjung%2Ftpcc-perf%2Ftpcc_cr.tar.gz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195584197%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=9OQRA5Zt8DCBk6t4Sn4NBRFFDDY5W2R9yKhbOJJ9s9o%3D&reserved=0>\ntar xzvf tpcc_cr.tar.gz\ncp tpcc_cr.sql node20/tpcc.sql\n\n# start the DBMS again and createdb\ncockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE DATABASE IF NOT EXISTS cockroachdb;\"\n--cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP DATABASE 
cockroachdb;\"\n\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"IMPORT PGDUMP 'nodelocal://self/tpcc.sql';\"\n\n# test\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"explain analyze select count(*) from order_line;\"\n\n# run query\ncockroach20 sql --insecure --host=localhost --port=26259 --database=cockroachdb < query.sql\n```\n\n\n### Postgre (from SRC)\n\n```sh\n# remove any previous postgres (if exist)\nsudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n\n# build latest postgres\ngit clone https://github.com/postgres/postgres.git<https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fpostgres%2Fpostgres.git&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195594191%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=PIb%2BUGT9Fu1CvkbxpJscUj5qapTPFNQpUtKWDVfQXPE%3D&reserved=0>\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n# install DBMS\nsudo su\nmake install\nadduser postgres\nrm -rf /usr/local/pgsql/data\nmkdir /usr/local/pgsql/data\nchown -R postgres /usr/local/pgsql/data\nsu - postgres\n/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n/usr/local/pgsql/bin/createdb jjung\n#/usr/local/pgsql/bin/psql postgresdb\n\n/usr/local/pgsql/bin/createuser -s {username}\n/usr/local/pgsql/bin/createdb postgresdb\n/usr/local/pgsql/bin/psql\n\n=# alter {username} with superuser\n\n# import DB\nwget 
https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz<https://nam12.safelinks.protection.outlook.com/?url=https:%2F%2Fgts3.org%2F~jjung%2Ftpcc-perf%2Ftpcc_pg.tar.gz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195594191%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=kDWbBCvTt2lzWTsdsIZrJvWsUCZUQSVS0OErqCTceVA%3D&reserved=0>\ntar xzvf tpcc_pg.tar.gz\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n\n# test\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from warehouse\"\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n\n# run query\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n```\n\n\n### Sqlite (from SRC)\n\n```sh\n# uninstall any existing\nsudo apt purge sliqte3\n\n# build latest sqlite from src\ngit clone https://github.com/sqlite/sqlite.git<https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fsqlite%2Fsqlite.git&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195604185%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=i7uMgx6QTVX0LjQ61m4kJPnJbW6cFDZcmz5x0hJC9Hk%3D&reserved=0>\ncd sqlite\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n# install DBMS\nsudo make install\n\n# import DB\nwget https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz<https://nam12.safelinks.protection.outlook.com/?url=https:%2F%2Fgts3.org%2F~jjung%2Ftpcc-perf%2Ftpcc_sq.tar.gz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195604185%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=vC8vdNyyekSFkbsUKFn9PkIZHZ9nOudUFBSBlWYe5kw%3D&reserved=0>\ntar xzvf tpcc_sq.tar.gz\n\n# test\nsqlite3 tpcc_sq.db\nsqlite> select 
* from warehouse;\n\n# run query\nsqlite3 tpcc_sq.db < query.sql\n```\n\n\n### Mysql (install V8.0.X)\n\n```sh\n# remove mysql v5.X (if exist)\nsudo apt purge mysql-server mysql-common mysql-client\n\n# install\nwget https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb<https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdev.mysql.com%2Fget%2Fmysql-apt-config_0.8.16-1_all.deb&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195614177%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=O65HyWp3z%2Bjh0g5eXX7SSEnzpM1Q6YRbFofoDsBb%2BQ4%3D&reserved=0>\nsudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n # then select mysql 8.0 server\nsudo apt update\nsudo apt install mysql-client mysql-community-server mysql-server\n\n# check\nmysql -u root -p\n\n# create user mysql\n CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n alter user 'root'@'localhost' identified by 'mysql';\n\n# modify the conf (should add \"skip-grant-tables\" under [mysqld])\nsudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n\n# optimize\n# e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66<https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgist.github.com%2Ffevangelou%2Ffb72f36bbe333e059b66&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195624175%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=sM2dgn%2BMZB4J37OWV7rt%2Bxvr1kSUhMCEjk3AEf2%2BOcg%3D&reserved=0>\n\n# import DB\nwget 
https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz<https://nam12.safelinks.protection.outlook.com/?url=https:%2F%2Fgts3.org%2F~jjung%2Ftpcc-perf%2Ftpcc_my.tar.gz&data=04%7C01%7Cjinho.jung%40gatech.edu%7C09089041b2a04ee4830008d8dcb28a18%7C482198bbae7b4b258b7a6d7f32faa083%7C0%7C0%7C637502008195624175%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C2000&sdata=vUUh%2Fxe130fW9zw61uXK%2B9a8aXZi%2F0xx9Mfp47mXsNg%3D&reserved=0>\ntar xzvf tpcc_my.tar.gz\nmysql -u mysql -pmysql -e \"create database mysqldb\"\nmysql -u mysql -pmysql mysqldb < tpcc_my.sql\n\n# test\nmysql -u mysql -pmysql mysqldb -e \"show tables\"\nmysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n\n# run query\nmysql -u mysql -pmysql mysqldb < query.sql\n```\n\n\n# Evaluation environment\n\n* Server: Ubuntu 18.04 (64bit)\n* CockroachDB: v20.2.5\n* PostgreSQL: latest commit (21 Feb, 2021)\n* MySQL: v8.0.23\n* SQLite: latest commit (21 Feb, 2021)\n\n\n\n\n\n\n\n\n\n\nAndrew, Bob, Michael\n\n\n\n\nThanks for the valuable feedback! Even with the default setting, PostgreSQL mostly showed good performance than other DBMSs. The reported queries are a very tiny portion among all executed queries\n(e.g., <0.001%).\n\n\n\nAs you guided, we will make the follow-up report after we test again with the performance-tuned PostgreSQL.\n\n\n\nHope we can contribute to improving PostgreSQL. \n\n\nThanks,\nJinho Jung\n\n\n\n\n\n\n\nFrom: MichaelDBA <[email protected]>\nSent: Monday, March 1, 2021 8:04 AM\nTo: Jung, Jinho <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Potential performance issues\n \n\nHi,\n\nIt is worthy work trying to compare performance across multiple database vendors, but unfortunately, it does not really come across as comparing apples to apples.\n\n\nFor instance, configuration parameters: I do not see where you are doing any modification of configuration at all. 
Since DBVendors are different in how they apply \"out of the box\" configuration, this alone can severely affect your comparison tests even though\n you are using a standard in benchmark testing, TPCC-C. Postgres is especially conservative in \"out of the box\" configuration. For instance, \"work_mem\" is set to an incredibly low value of 4MB. This has a big impact on many types of queries. Oracle has something\n called SGA_TARGET, which if enabled, self-regulates where the memory is utilized, thus not limiting query memory specifically in the way Postgres does. This is just one example of a bazillion others where differences in \"out of the box\" configuration makes\n these tests more like comparing apples to oranges. There are many other areas of configuration related to memory, disk, parallel execution, io concurrency, etc.\n\nIn sum, when comparing performance across different database vendors, there are many other factors that must be taken into account when trying to do an impartial comparison. I just showed one: how configuration differences can skew the results.\n\nRegards,\nMichael Vitale\n\n\n\n\nJung, Jinho wrote on 2/28/2021 10:04 AM:\n\n\n# Performance issues discovered from differential test\n\n\nHello. We are studying DBMS from GeorgiaTech and reporting interesting queries that potentially show performance problems.\n\n\nTo discover such cases, we used the following procedures:\n\n\n* Install four DBMSs with the latest version (PostgreSQL, SQLite, MySQL, CockroachDB)\n* Import TPCC-C benchmark for each DBMS\n* Generate random query (and translate the query to handle different dialects)\n* Run the query and measure the query execution time\n * Remove `LIMIT` to prevent any non-deterministic behaviors\n * Discard the test case if any DBMS returned an error\n * Some DBMS does not show the actual query execution time. 
In this case, query the `current time` before and after the actual query, and then we calculate the elapsed time.\n\n\nIn this report, we attached a few queries. We believe that there are many duplicated or false-positive cases. It would be great if we can get feedback about the reported queries. Once we know the root cause of the problem or false positive, we will make\n a follow-up report after we remove them all.\n\n\nFor example, the below query runs x1000 slower than other DBMSs from PostgreSQL.\n\n\n\n select ref_0.ol_amount as c0\n from order_line as ref_0\n left join stock as ref_1\n on (ref_0.ol_o_id = ref_1.s_w_id )\n inner join warehouse as ref_2\n on (ref_1.s_dist_09 is NULL)\n where ref_2.w_tax is NULL;\n\n\n\n\n* Query files link:\n\n\nwget \nhttps://gts3.org/~jjung/report1/pg.tar.gz\n\n\n\n* Execution result (execution time (second))\n\n\n| Filename | Postgres | Mysql | Cockroachdb | Sqlite | Ratio |\n|---------:|---------:|---------:|------------:|---------:|---------:|\n| 34065 | 1.31911 | 0.013 | 0.02493 | 1.025 | 101.47 |\n| 36399 | 3.60298 | 0.015 | 1.05593 | 3.487 | 240.20 |\n| 35767 | 4.01327 | 0.032 | 0.00727 | 2.311 | 552.19 |\n| 11132 | 4.3518 | 0.022 | 0.00635 | 3.617 | 684.88 |\n| 29658 | 4.6783 | 0.034 | 0.00778 | 2.63 | 601.10 |\n| 19522 | 1.06943 | 0.014 | 0.00569 | 0.0009 | 1188.26 |\n| 38388 | 3.21383 | 0.013 | 0.00913 | 2.462 | 352.09 |\n| 7187 | 1.20267 | 0.015 | 0.00316 | 0.0009 | 1336.30 |\n| 24121 | 2.80611 | 0.014 | 0.03083 | 0.005 | 561.21 |\n| 25800 | 3.95163 | 0.024 | 0.73027 | 3.876 | 164.65 |\n| 2030 | 1.91181 | 0.013 | 0.04123 | 1.634 | 147.06 |\n| 17383 | 3.28785 | 0.014 | 0.00611 | 2.4 | 538.45 |\n| 19551 | 4.70967 | 0.014 | 0.00329 | 0.0009 | 5232.97 |\n| 26595 | 3.70423 | 0.014 | 0.00601 | 2.747 | 615.92 |\n| 469 | 4.18906 | 0.013 | 0.12343 | 0.016 | 322.23 |\n\n\n\n\n# Reproduce: install DBMSs, import TPCC benchmark, run query\n\n\n### Cockroach (from binary)\n\n\n```sh\n# install DBMS\nwget 
\nhttps://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\ntar xzvf cockroach-v20.2.5.linux-amd64.tgz\nsudo cp -i cockroach-v20.2.5.linux-amd64/cockroach /usr/local/bin/cockroach20\n\n\nsudo mkdir -p /usr/local/lib/cockroach\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/\nsudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/\n\n\n# test\nwhich cockroach20\ncockroach20 demo\n\n\n# start the DBMS (to make initial node files)\ncd ~\ncockroach20 start-single-node --insecure --store=node20 --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB --background\n# quit\ncockroach20 quit --insecure --host=localhost:26259\n\n\n# import DB\nmkdir -p node20/extern\nwget \nhttps://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\ntar xzvf tpcc_cr.tar.gz\ncp tpcc_cr.sql node20/tpcc.sql\n\n\n# start the DBMS again and createdb\ncockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE DATABASE IF NOT EXISTS cockroachdb;\"\n--cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP DATABASE cockroachdb;\"\n\n\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"IMPORT PGDUMP 'nodelocal://self/tpcc.sql';\"\n\n\n# test\ncockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"explain analyze select count(*) from order_line;\"\n\n\n# run query\ncockroach20 sql --insecure --host=localhost --port=26259 --database=cockroachdb < query.sql\n```\n\n\n\n\n### Postgre (from SRC)\n\n\n```sh\n# remove any previous postgres (if exist)\nsudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n\n\n# build latest postgres\ngit clone \nhttps://github.com/postgres/postgres.git\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n\n# install DBMS\nsudo su\nmake install\nadduser postgres\nrm -rf /usr/local/pgsql/data\nmkdir /usr/local/pgsql/data\nchown -R postgres /usr/local/pgsql/data\nsu - postgres\n/usr/local/pgsql/bin/initdb 
-D /usr/local/pgsql/data\n/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n/usr/local/pgsql/bin/createdb jjung\n#/usr/local/pgsql/bin/psql postgresdb\n\n\n/usr/local/pgsql/bin/createuser -s {username}\n/usr/local/pgsql/bin/createdb postgresdb\n/usr/local/pgsql/bin/psql\n\n\n=# alter {username} with superuser\n\n\n# import DB\nwget \nhttps://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\ntar xzvf tpcc_pg.tar.gz\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n\n\n# test\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from warehouse\"\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n\n\n# run query\n/usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n```\n\n\n\n\n### Sqlite (from SRC)\n\n\n```sh\n# uninstall any existing\nsudo apt purge sliqte3\n\n\n# build latest sqlite from src\ngit clone \nhttps://github.com/sqlite/sqlite.git\ncd sqlite\nmkdir bld\ncd bld\n../configure\nmake -j 20\n\n\n# install DBMS\nsudo make install\n\n\n# import DB\nwget \nhttps://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\ntar xzvf tpcc_sq.tar.gz\n\n\n# test\nsqlite3 tpcc_sq.db\nsqlite> select * from warehouse;\n\n\n# run query\nsqlite3 tpcc_sq.db < query.sql\n```\n\n\n\n\n### Mysql (install V8.0.X)\n\n\n```sh\n# remove mysql v5.X (if exist)\nsudo apt purge mysql-server mysql-common mysql-client\n\n\n# install\nwget \nhttps://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\nsudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n # then select mysql 8.0 server\nsudo apt update\nsudo apt install mysql-client mysql-community-server mysql-server\n\n\n# check\nmysql -u root -p\n\n\n# create user mysql\n CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n alter user 'root'@'localhost' identified by 'mysql';\n\n\n# modify the conf (should add \"skip-grant-tables\" under [mysqld])\nsudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n\n\n# optimize\n# e.g., \nhttps://gist.github.com/fevangelou/fb72f36bbe333e059b66\n\n\n# import DB\nwget 
\nhttps://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\ntar xzvf tpcc_my.tar.gz\nmysql -u mysql -pmysql -e \"create database mysqldb\"\nmysql -u mysql -pmysql mysqldb < tpcc_my.sql\n\n\n# test\nmysql -u mysql -pmysql mysqldb -e \"show tables\"\nmysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n\n\n# run query\nmysql -u mysql -pmysql mysqldb < query.sql\n```\n\n\n\n\n# Evaluation environment\n\n\n* Server: Ubuntu 18.04 (64bit)\n* CockroachDB: v20.2.5\n* PostgreSQL: latest commit (21 Feb, 2021)\n* MySQL: v8.0.23\n* SQLite: latest commit (21 Feb, 2021)",
"msg_date": "Mon, 1 Mar 2021 14:41:22 +0000",
"msg_from": "\"Jung, Jinho\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Jung, Jinho schrieb am 28.02.2021 um 16:04:\n> # Performance issues discovered from differential test\n>\n> For example, the below query runs x1000 slower than other DBMSs from PostgreSQL.\n>\n> select ref_0.ol_amount as c0\n> from order_line as ref_0\n> left join stock as ref_1\n> on (ref_0.ol_o_id = ref_1.s_w_id )\n> inner join warehouse as ref_2\n> on (ref_1.s_dist_09 is NULL)\n> where ref_2.w_tax is NULL;\n\nI find this query extremely weird to be honest.\n\nThere is no join condition between warehouse and the other two tables which results in a cross join.\nWhich is \"reduced\" somehow by applying the IS NULL conditions - but still, to me this makes no sense.\n\nMaybe the Postgres optimizer doesn't handle this ugly \"join condition\" the same way the others do.\n\nI would rather expect a NOT EXISTS against the warehouse table.\n\nThomas\n\n\n",
"msg_date": "Mon, 1 Mar 2021 15:44:02 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 8:44 AM Bob Jolliffe <[email protected]> wrote:\n\n> Was just about to reply similarly. Mind you it perhaps does raise the\n> question : are the default postgresql settings perhaps too\n> conservative or too static. For example, in the absence of other\n> explicit configuration, might it make more sense for many use cases\n> for postgres to assess the physical memory available and make some\n> half-sensible allocations based on that? I know there are downsides\n> to assuming that postgresql has free reign to all that it sees, but\n> there are clearly also some downsides in assuming it has next to\n> nothing. This could also be more correctly part of a package\n> installation procedure, but just floating the idea ... some kind of\n> auto-tuning vs ultra-conservative defaults.\n>\n>\nWhen you spin up an Aurora or RDS instance in AWS, their default parameter\ngroup values are mostly set by formulas which derive values based on the\ninstance size. Of course they can assume free reign of the entire system,\nbut the values they choose are still somewhat interesting.\n\nFor example, they set `maintenance_work_mem` like this:\n\"GREATEST({DBInstanceClassMemory/63963136*1024},65536)\"\n\nIt doesn't completely remove the need for a human to optimize the parameter\ngroup based on your use case, but it does seem to give you a better novice\nstarting point to work from. And there are definitely some formulas that I\ndisagree with in the general case. However it is something that is\nadaptable for those times when you bump up the server size, but don't want\nto have to revisit and update every parameter to support the change.\n\nI've been thinking a lot about running PG in containers for dev\nenvironments lately, and trying to tune to get reasonable dev performance\nout of a container without crushing the other services and containers on\nthe laptop. 
Most developers that I've worked with over the past few years\nonly have exposure to running PG in a container. They've simply never run\nit on a server or even barebones on their laptop. I think any modern\napproach to a default set of tuning parameters would probably also need to\nbe \"container aware\", which is for all practical purposes the new default\n\"minimal configuration\" on multi-purpose systems.",
"msg_date": "Mon, 1 Mar 2021 09:53:49 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "Hi\n\npo 1. 3. 2021 v 15:59 odesílatel Jung, Jinho <[email protected]> napsal:\n\n> Andrew, Bob, Michael\n>\n> Thanks for the valuable feedback! Even with the default setting,\n> PostgreSQL mostly showed good performance than other DBMSs. The reported\n> queries are a very tiny portion among all executed queries (e.g., <0.001%).\n>\n>\n> As you guided, we will make the follow-up report after we test again with\n> the performance-tuned PostgreSQL.\n>\n> Hope we can contribute to improving PostgreSQL.\n>\n\nImportant thing - assign execution plan of slow query\n\nhttps://explain.depesz.com/\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nRegards\n\nPavel\n\n\n> Thanks,\n> Jinho Jung\n>\n> ------------------------------\n> *From:* MichaelDBA <[email protected]>\n> *Sent:* Monday, March 1, 2021 8:04 AM\n> *To:* Jung, Jinho <[email protected]>\n> *Cc:* [email protected] <[email protected]>\n> *Subject:* Re: Potential performance issues\n>\n> Hi,\n>\n> It is worthy work trying to compare performance across multiple database\n> vendors, but unfortunately, it does not really come across as comparing\n> apples to apples.\n>\n> For instance, configuration parameters: I do not see where you are doing\n> any modification of configuration at all. Since DBVendors are different in\n> how they apply \"out of the box\" configuration, this alone can severely\n> affect your comparison tests even though you are using a standard in\n> benchmark testing, TPCC-C. Postgres is especially conservative in \"out of\n> the box\" configuration. For instance, \"work_mem\" is set to an incredibly\n> low value of 4MB. This has a big impact on many types of queries. Oracle\n> has something called SGA_TARGET, which if enabled, self-regulates where the\n> memory is utilized, thus not limiting query memory specifically in the way\n> Postgres does. 
This is just one example of a bazillion others where\n> differences in \"out of the box\" configuration makes these tests more like\n> comparing apples to oranges. There are many other areas of configuration\n> related to memory, disk, parallel execution, io concurrency, etc.\n>\n> In sum, when comparing performance across different database vendors,\n> there are many other factors that must be taken into account when trying to\n> do an impartial comparison. I just showed one: how configuration\n> differences can skew the results.\n>\n> Regards,\n> Michael Vitale\n>\n>\n>\n>\n> Jung, Jinho wrote on 2/28/2021 10:04 AM:\n>\n> # Performance issues discovered from differential test\n>\n> Hello. We are studying DBMS from GeorgiaTech and reporting interesting\n> queries that potentially show performance problems.\n>\n> To discover such cases, we used the following procedures:\n>\n> * Install four DBMSs with the latest version (PostgreSQL, SQLite, MySQL,\n> CockroachDB)\n> * Import TPCC-C benchmark for each DBMS\n> * Generate random query (and translate the query to handle different\n> dialects)\n> * Run the query and measure the query execution time\n> * Remove `LIMIT` to prevent any non-deterministic behaviors\n> * Discard the test case if any DBMS returned an error\n> * Some DBMS does not show the actual query execution time. In this\n> case, query the `current time` before and after the actual query, and then\n> we calculate the elapsed time.\n>\n> In this report, we attached a few queries. We believe that there are many\n> duplicated or false-positive cases. It would be great if we can get\n> feedback about the reported queries. 
Once we know the root cause of the\n> problem or false positive, we will make a follow-up report after we remove\n> them all.\n>\n> For example, the below query runs x1000 slower than other DBMSs from\n> PostgreSQL.\n>\n> select ref_0.ol_amount as c0\n> from order_line as ref_0\n> left join stock as ref_1\n> on (ref_0.ol_o_id = ref_1.s_w_id )\n> inner join warehouse as ref_2\n> on (ref_1.s_dist_09 is NULL)\n> where ref_2.w_tax is NULL;\n>\n>\n> * Query files link:\n>\n> wget https://gts3.org/~jjung/report1/pg.tar.gz\n>\n> * Execution result (execution time (second))\n>\n> | Filename | Postgres | Mysql | Cockroachdb | Sqlite | Ratio |\n> |---------:|---------:|---------:|------------:|---------:|---------:|\n> | 34065 | 1.31911 | 0.013 | 0.02493 | 1.025 | 101.47 |\n> | 36399 | 3.60298 | 0.015 | 1.05593 | 3.487 | 240.20 |\n> | 35767 | 4.01327 | 0.032 | 0.00727 | 2.311 | 552.19 |\n> | 11132 | 4.3518 | 0.022 | 0.00635 | 3.617 | 684.88 |\n> | 29658 | 4.6783 | 0.034 | 0.00778 | 2.63 | 601.10 |\n> | 19522 | 1.06943 | 0.014 | 0.00569 | 0.0009 | 1188.26 |\n> | 38388 | 3.21383 | 0.013 | 0.00913 | 2.462 | 352.09 |\n> | 7187 | 1.20267 | 0.015 | 0.00316 | 0.0009 | 1336.30 |\n> | 24121 | 2.80611 | 0.014 | 0.03083 | 0.005 | 561.21 |\n> | 25800 | 3.95163 | 0.024 | 0.73027 | 3.876 | 164.65 |\n> | 2030 | 1.91181 | 0.013 | 0.04123 | 1.634 | 147.06 |\n> | 17383 | 3.28785 | 0.014 | 0.00611 | 2.4 | 538.45 |\n> | 19551 | 4.70967 | 0.014 | 0.00329 | 0.0009 | 5232.97 |\n> | 26595 | 3.70423 | 0.014 | 0.00601 | 2.747 | 615.92 |\n> | 469 | 4.18906 | 0.013 | 0.12343 | 0.016 | 322.23 
|\n>\n>\n> # Reproduce: install DBMSs, import TPCC benchmark, run query\n>\n> ### Cockroach (from binary)\n>\n> ```sh\n> # install DBMS\n> wget https://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\n> tar xzvf cockroach-v20.2.5.linux-amd64.tgz\n> sudo cp -i cockroach-v20.2.5.linux-amd64/cockroach\n> /usr/local/bin/cockroach20\n>\n> sudo mkdir -p /usr/local/lib/cockroach\n> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so\n> /usr/local/lib/cockroach/\n> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so\n> /usr/local/lib/cockroach/\n>\n> # test\n> which cockroach20\n> cockroach20 demo\n>\n> # start the DBMS (to make initial node files)\n> cd ~\n> cockroach20 start-single-node --insecure --store=node20\n> --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB\n> --background\n> # quit\n> cockroach20 quit --insecure --host=localhost:26259\n>\n> # import DB\n> mkdir -p node20/extern\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\n> tar xzvf tpcc_cr.tar.gz\n> cp tpcc_cr.sql node20/tpcc.sql\n>\n> # start the DBMS again and createdb\n> cockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE\n> DATABASE IF NOT EXISTS cockroachdb;\"\n> --cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP\n> DATABASE cockroachdb;\"\n>\n> cockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb\n> --execute=\"IMPORT PGDUMP 'nodelocal://self/tpcc.sql';\"\n>\n> # test\n> cockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb\n> --execute=\"explain analyze select count(*) from order_line;\"\n>\n> # run query\n> cockroach20 sql --insecure --host=localhost --port=26259\n> --database=cockroachdb < query.sql\n> ```\n>\n>\n> ### Postgre (from SRC)\n>\n> ```sh\n> # remove any previous postgres (if exist)\n> sudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n>\n> # build latest postgres\n> git clone https://github.com/postgres/postgres.git\n> mkdir bld\n> cd bld\n> ../configure\n> make -j 20\n>\n> # install DBMS\n> sudo su\n> make install\n> adduser postgres\n> rm -rf /usr/local/pgsql/data\n> mkdir /usr/local/pgsql/data\n> chown -R postgres /usr/local/pgsql/data\n> su - postgres\n> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n> /usr/local/pgsql/bin/createdb jjung\n> #/usr/local/pgsql/bin/psql postgresdb\n>\n> /usr/local/pgsql/bin/createuser -s {username}\n> /usr/local/pgsql/bin/createdb postgresdb\n> /usr/local/pgsql/bin/psql\n>\n> =# alter {username} with superuser\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\n> tar xzvf tpcc_pg.tar.gz\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n>\n> # test\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from\n> warehouse\"\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n>\n> # run query\n> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n> ```\n>\n>\n> ### Sqlite (from SRC)\n>\n> ```sh\n> # uninstall any existing\n> sudo apt purge sliqte3\n>\n> # build latest sqlite from src\n> git clone https://github.com/sqlite/sqlite.git\n> cd sqlite\n> mkdir bld\n> cd bld\n> ../configure\n> make -j 20\n>\n> # install DBMS\n> sudo make install\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\n> tar xzvf tpcc_sq.tar.gz\n>\n> # test\n> sqlite3 tpcc_sq.db\n> sqlite> select * from warehouse;\n>\n> # run query\n> sqlite3 tpcc_sq.db < query.sql\n> ```\n>\n>\n> ### Mysql (install V8.0.X)\n>\n> ```sh\n> # remove mysql v5.X (if exist)\n> sudo apt purge mysql-server mysql-common mysql-client\n>\n> # install\n> wget https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\n> sudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n> # then select mysql 8.0 server\n> sudo apt update\n> sudo apt install mysql-client mysql-community-server mysql-server\n>\n> # check\n> mysql -u root -p\n>\n> # create user mysql\n> CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n> alter user 'root'@'localhost' identified by 'mysql';\n>\n> # modify the conf (should add \"skip-grant-tables\" under [mysqld])\n> sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n>\n> # optimize\n> # e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66\n>\n> # import DB\n> wget https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\n> 
tar xzvf tpcc_my.tar.gz\n> mysql -u mysql -pmysql -e \"create database mysqldb\"\n> mysql -u mysql -pmysql mysqldb < tpcc_my.sql\n>\n> # test\n> mysql -u mysql -pmysql mysqldb -e \"show tables\"\n> mysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n>\n> # run query\n> mysql -u mysql -pmysql mysqldb < query.sql\n> ```\n>\n>\n> # Evaluation environment\n>\n> * Server: Ubuntu 18.04 (64bit)\n> * CockroachDB: v20.2.5\n> * PostgreSQL: latest commit (21 Feb, 2021)\n> * MySQL: v8.0.23\n> * SQLite: latest commit (21 Feb, 2021)",
"msg_date": "Mon, 1 Mar 2021 16:06:16 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
},
{
"msg_contents": "...\n\n* Remove `LIMIT` to prevent any non-deterministic behaviors\n\nThis seems counterproductive, as for example PostgreSQL has special\nhandling of \"fast start\" queries which is triggered by presence of\nLIMIT or OFFSET, so this will miss some optimisations.\n\nAlso,it is not like removing LIMIT is some magic bullet which\nguarantees there are not non-deterministic behaviors - cost-based\noptimisers can see lots of plan changes due to many things, like when\nanalyse and/or vacuum was run last time, what is and is not in shared\nbuffers, how much of table fits in disk cache, and which parts etc.\n\nCheers\nHannu\n\n\n\n\nOn Mon, Mar 1, 2021 at 4:07 PM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> po 1. 3. 2021 v 15:59 odesílatel Jung, Jinho <[email protected]> napsal:\n>>\n>> Andrew, Bob, Michael\n>>\n>> Thanks for the valuable feedback! Even with the default setting, PostgreSQL mostly showed good performance than other DBMSs. The reported queries are a very tiny portion among all executed queries (e.g., <0.001%).\n>>\n>> As you guided, we will make the follow-up report after we test again with the performance-tuned PostgreSQL.\n>>\n>> Hope we can contribute to improving PostgreSQL.\n>\n>\n> Important thing - assign execution plan of slow query\n>\n> https://explain.depesz.com/\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> Thanks,\n>> Jinho Jung\n>>\n>> ________________________________\n>> From: MichaelDBA <[email protected]>\n>> Sent: Monday, March 1, 2021 8:04 AM\n>> To: Jung, Jinho <[email protected]>\n>> Cc: [email protected] <[email protected]>\n>> Subject: Re: Potential performance issues\n>>\n>> Hi,\n>>\n>> It is worthy work trying to compare performance across multiple database vendors, but unfortunately, it does not really come across as comparing apples to apples.\n>>\n>> For instance, configuration parameters: I do not see where you are doing any modification of 
configuration at all. Since DBVendors are different in how they apply \"out of the box\" configuration, this alone can severely affect your comparison tests even though you are using a standard in benchmark testing, TPCC-C. Postgres is especially conservative in \"out of the box\" configuration. For instance, \"work_mem\" is set to an incredibly low value of 4MB. This has a big impact on many types of queries. Oracle has something called SGA_TARGET, which if enabled, self-regulates where the memory is utilized, thus not limiting query memory specifically in the way Postgres does. This is just one example of a bazillion others where differences in \"out of the box\" configuration makes these tests more like comparing apples to oranges. There are many other areas of configuration related to memory, disk, parallel execution, io concurrency, etc.\n>>\n>> In sum, when comparing performance across different database vendors, there are many other factors that must be taken into account when trying to do an impartial comparison. I just showed one: how configuration differences can skew the results.\n>>\n>> Regards,\n>> Michael Vitale\n>>\n>>\n>>\n>>\n>> Jung, Jinho wrote on 2/28/2021 10:04 AM:\n>>\n>> # Performance issues discovered from differential test\n>>\n>> Hello. We are studying DBMS from GeorgiaTech and reporting interesting queries that potentially show performance problems.\n>>\n>> To discover such cases, we used the following procedures:\n>>\n>> * Install four DBMSs with the latest version (PostgreSQL, SQLite, MySQL, CockroachDB)\n>> * Import TPCC-C benchmark for each DBMS\n>> * Generate random query (and translate the query to handle different dialects)\n>> * Run the query and measure the query execution time\n>> * Remove `LIMIT` to prevent any non-deterministic behaviors\n>> * Discard the test case if any DBMS returned an error\n>> * Some DBMS does not show the actual query execution time. 
In this case, query the `current time` before and after the actual query, and then we calculate the elapsed time.\n>>\n>> In this report, we attached a few queries. We believe that there are many duplicated or false-positive cases. It would be great if we can get feedback about the reported queries. Once we know the root cause of the problem or false positive, we will make a follow-up report after we remove them all.\n>>\n>> For example, the below query runs x1000 slower than other DBMSs from PostgreSQL.\n>>\n>> select ref_0.ol_amount as c0\n>> from order_line as ref_0\n>> left join stock as ref_1\n>> on (ref_0.ol_o_id = ref_1.s_w_id )\n>> inner join warehouse as ref_2\n>> on (ref_1.s_dist_09 is NULL)\n>> where ref_2.w_tax is NULL;\n>>\n>>\n>> * Query files link:\n>>\n>> wget https://gts3.org/~jjung/report1/pg.tar.gz\n>>\n>> * Execution result (execution time (second))\n>>\n>> | Filename | Postgres | Mysql | Cockroachdb | Sqlite | Ratio |\n>> |---------:|---------:|---------:|------------:|---------:|---------:|\n>> | 34065 | 1.31911 | 0.013 | 0.02493 | 1.025 | 101.47 |\n>> | 36399 | 3.60298 | 0.015 | 1.05593 | 3.487 | 240.20 |\n>> | 35767 | 4.01327 | 0.032 | 0.00727 | 2.311 | 552.19 |\n>> | 11132 | 4.3518 | 0.022 | 0.00635 | 3.617 | 684.88 |\n>> | 29658 | 4.6783 | 0.034 | 0.00778 | 2.63 | 601.10 |\n>> | 19522 | 1.06943 | 0.014 | 0.00569 | 0.0009 | 1188.26 |\n>> | 38388 | 3.21383 | 0.013 | 0.00913 | 2.462 | 352.09 |\n>> | 7187 | 1.20267 | 0.015 | 0.00316 | 0.0009 | 1336.30 |\n>> | 24121 | 2.80611 | 0.014 | 0.03083 | 0.005 | 561.21 |\n>> | 25800 | 3.95163 | 0.024 | 0.73027 | 3.876 | 164.65 |\n>> | 2030 | 1.91181 | 0.013 | 0.04123 | 1.634 | 147.06 |\n>> | 17383 | 3.28785 | 0.014 | 0.00611 | 2.4 | 538.45 |\n>> | 19551 | 4.70967 | 0.014 | 0.00329 | 0.0009 | 5232.97 |\n>> | 26595 | 3.70423 | 0.014 | 0.00601 | 2.747 | 615.92 |\n>> | 469 | 4.18906 | 0.013 | 0.12343 | 0.016 | 322.23 |\n>>\n>>\n>> # Reproduce: install DBMSs, import TPCC benchmark, run query\n>>\n>> ### 
Cockroach (from binary)\n>>\n>> ```sh\n>> # install DBMS\n>> wget https://binaries.cockroachdb.com/cockroach-v20.2.5.linux-amd64.tgz\n>> tar xzvf cockroach-v20.2.5.linux-amd64.tgz\n>> sudo cp -i cockroach-v20.2.5.linux-amd64/cockroach /usr/local/bin/cockroach20\n>>\n>> sudo mkdir -p /usr/local/lib/cockroach\n>> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/\n>> sudo cp -i cockroach-v20.2.5.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/\n>>\n>> # test\n>> which cockroach20\n>> cockroach20 demo\n>>\n>> # start the DBMS (to make initial node files)\n>> cd ~\n>> cockroach20 start-single-node --insecure --store=node20 --listen-addr=localhost:26259 --http-port=28080 --max-sql-memory=1GB --background\n>> # quit\n>> cockroach20 quit --insecure --host=localhost:26259\n>>\n>> # import DB\n>> mkdir -p node20/extern\n>> wget https://gts3.org/~jjung/tpcc-perf/tpcc_cr.tar.gz\n>> tar xzvf tpcc_cr.tar.gz\n>> cp tpcc_cr.sql node20/tpcc.sql\n>>\n>> # start the DBMS again and createdb\n>> cockroach20 sql --insecure --host=localhost:26259 --execute=\"CREATE DATABASE IF NOT EXISTS cockroachdb;\"\n>> --cockroach20 sql --insecure --host=localhost:26259 --execute=\"DROP DATABASE cockroachdb;\"\n>>\n>> cockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"IMPORT PGDUMP 'nodelocal://self/tpcc.sql';\"\n>>\n>> # test\n>> cockroach20 sql --insecure --host=localhost:26259 --database=cockroachdb --execute=\"explain analyze select count(*) from order_line;\"\n>>\n>> # run query\n>> cockroach20 sql --insecure --host=localhost --port=26259 --database=cockroachdb < query.sql\n>> ```\n>>\n>>\n>> ### Postgre (from SRC)\n>>\n>> ```sh\n>> # remove any previous postgres (if exist)\n>> sudo apt-get --purge remove postgresql postgresql-doc postgresql-common\n>>\n>> # build latest postgres\n>> git clone https://github.com/postgres/postgres.git\n>> mkdir bld\n>> cd bld\n>> ../configure\n>> make -j 20\n>>\n>> # install DBMS\n>> sudo 
su\n>> make install\n>> adduser postgres\n>> rm -rf /usr/local/pgsql/data\n>> mkdir /usr/local/pgsql/data\n>> chown -R postgres /usr/local/pgsql/data\n>> su - postgres\n>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data\n>> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start\n>> /usr/local/pgsql/bin/createdb jjung\n>> #/usr/local/pgsql/bin/psql postgresdb\n>>\n>> /usr/local/pgsql/bin/createuser -s {username}\n>> /usr/local/pgsql/bin/createdb postgresdb\n>> /usr/local/pgsql/bin/psql\n>>\n>> =# alter {username} with superuser\n>>\n>> # import DB\n>> wget https://gts3.org/~jjung/tpcc-perf/tpcc_pg.tar.gz\n>> tar xzvf tpcc_pg.tar.gz\n>> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f tpcc_pg.sql\n>>\n>> # test\n>> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"select * from warehouse\"\n>> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -c \"\\\\dt\"\n>>\n>> # run query\n>> /usr/local/pgsql/bin/psql -p 5432 -d postgresdb -f query.sql\n>> ```\n>>\n>>\n>> ### Sqlite (from SRC)\n>>\n>> ```sh\n>> # uninstall any existing\n>> sudo apt purge sliqte3\n>>\n>> # build latest sqlite from src\n>> git clone https://github.com/sqlite/sqlite.git\n>> cd sqlite\n>> mkdir bld\n>> cd bld\n>> ../configure\n>> make -j 20\n>>\n>> # install DBMS\n>> sudo make install\n>>\n>> # import DB\n>> wget https://gts3.org/~jjung/tpcc-perf/tpcc_sq.tar.gz\n>> tar xzvf tpcc_sq.tar.gz\n>>\n>> # test\n>> sqlite3 tpcc_sq.db\n>> sqlite> select * from warehouse;\n>>\n>> # run query\n>> sqlite3 tpcc_sq.db < query.sql\n>> ```\n>>\n>>\n>> ### Mysql (install V8.0.X)\n>>\n>> ```sh\n>> # remove mysql v5.X (if exist)\n>> sudo apt purge mysql-server mysql-common mysql-client\n>>\n>> # install\n>> wget https://dev.mysql.com/get/mysql-apt-config_0.8.16-1_all.deb\n>> sudo dpkg -i mysql-apt-config_0.8.16-1_all.deb\n>> # then select mysql 8.0 server\n>> sudo apt update\n>> sudo apt install mysql-client mysql-community-server mysql-server\n>>\n>> # check\n>> mysql -u root -p\n>>\n>> # create 
user mysql\n>> CREATE USER 'mysql'@'localhost' IDENTIFIED BY 'mysql';\n>> alter user 'root'@'localhost' identified by 'mysql';\n>>\n>> # modify the conf (should add \"skip-grant-tables\" under [mysqld])\n>> sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf\n>>\n>> # optimize\n>> # e.g., https://gist.github.com/fevangelou/fb72f36bbe333e059b66\n>>\n>> # import DB\n>> wget https://gts3.org/~jjung/tpcc-perf/tpcc_my.tar.gz\n>> tar xzvf tpcc_my.tar.gz\n>> mysql -u mysql -pmysql -e \"create database mysqldb\"\n>> mysql -u mysql -pmysql mysqldb < tpcc_my.sql\n>>\n>> # test\n>> mysql -u mysql -pmysql mysqldb -e \"show tables\"\n>> mysql -u mysql -pmysql mysqldb -e \"select * from customer\"\n>>\n>> # run query\n>> mysql -u mysql -pmysql mysqldb < query.sql\n>> ```\n>>\n>>\n>> # Evaluation environment\n>>\n>> * Server: Ubuntu 18.04 (64bit)\n>> * CockroachDB: v20.2.5\n>> * PostgreSQL: latest commit (21 Feb, 2021)\n>> * MySQL: v8.0.23\n>> * SQLite: latest commit (21 Feb, 2021)\n>>\n>>\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:58:06 +0100",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues"
}
] |
[
{
"msg_contents": "Hello,\n\n\nWe have 2 TPC-H queries which fetch the same tuples but have significant query execution time differences (4.3 times).\n\n\nWe are sharing a pair of TPC-H queries that exhibit this performance difference:\n\n\nFirst query:\n\nSELECT \"ps_comment\",\n\n \"ps_suppkey\",\n\n \"ps_supplycost\",\n\n \"ps_partkey\",\n\n \"ps_availqty\"\n\nFROM \"partsupp\"\n\nWHERE \"ps_partkey\" + 16 < 1\n\n OR \"ps_partkey\" = 2\n\nGROUP BY \"ps_partkey\",\n\n \"ps_suppkey\",\n\n \"ps_availqty\",\n\n \"ps_supplycost\",\n\n \"ps_comment\"\n\n\nSecond query:\n\nSELECT \"ps_comment\",\n\n \"ps_suppkey\",\n\n \"ps_supplycost\",\n\n \"ps_partkey\",\n\n \"ps_availqty\"\n\nFROM \"partsupp\"\n\nWHERE \"ps_partkey\" + 16 < 1\n\n OR \"ps_partkey\" = 2\n\nGROUP BY \"ps_comment\",\n\n \"ps_suppkey\",\n\n \"ps_supplycost\",\n\n \"ps_partkey\",\n\n \"ps_availqty\"\n\n\n* Actual Behavior\n\nWe executed both queries on the TPC-H benchmark of scale factor 5: the first query takes over 1.7 seconds, while the second query only takes 0.4 seconds.\nWe think the time difference results from different plans selected. 
Specifically, in the first (slow) query, the DBMS performs an index scan on table partsupp using the covering index (ps_partkey, ps_suppkey), while the second (fast) query performs a parallel scan on (ps_suppkey, ps_partkey).\n\n\n* Query Execution Plan\n\n * First query:\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n\n Group (cost=0.43..342188.58 rows=399262 width=144) (actual time=0.058..1737.659 rows=4 loops=1)\n\n Group Key: ps_partkey, ps_suppkey\n\n Buffers: shared hit=123005 read=98055\n\n -> Index Scan using partsupp_pkey on partsupp (cost=0.43..335522.75 rows=1333167 width=144) (actual time=0.055..1737.651 rows=4 loops=1)\n\n Filter: (((ps_partkey + 16) < 1) OR (ps_partkey = 2))\n\n Rows Removed by Filter: 3999996\n\n Buffers: shared hit=123005 read=98055\n\n Planning Time: 0.926 ms\n\n Execution Time: 1737.754 ms\n\n(9 rows)\n\n\n\n\n\n * Second query:\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n\n Group (cost=250110.68..350438.93 rows=399262 width=144) (actual time=400.353..400.361 rows=4 loops=1)\n\n Group Key: ps_suppkey, ps_partkey\n\n Buffers: shared hit=5481 read=24093\n\n -> Gather Merge (cost=250110.68..346446.31 rows=798524 width=144) (actual time=400.351..406.741 rows=4 loops=1)\n\n Workers Planned: 2\n\n Workers Launched: 2\n\n Buffers: shared hit=15151 read=72144\n\n -> Group (cost=249110.66..253276.80 rows=399262 width=144) (actual time=395.882..395.883 rows=1 loops=3)\n\n Group Key: ps_suppkey, ps_partkey\n\n Buffers: shared hit=15151 read=72144\n\n -> Sort (cost=249110.66..250499.37 rows=555486 width=144) (actual time=395.880..395.881 rows=1 loops=3)\n\n Sort Key: ps_suppkey, ps_partkey\n\n Sort Method: quicksort Memory: 25kB\n\n Worker 0: Sort Method: quicksort Memory: 25kB\n\n Worker 1: Sort Method: 
quicksort Memory: 25kB\n\n Buffers: shared hit=15151 read=72144\n\n -> Parallel Seq Scan on partsupp (cost=0.00..116363.88 rows=555486 width=144) (actual time=395.518..395.615 rows=1 loops=3)\n\n Filter: (((ps_partkey + 16) < 1) OR (ps_partkey = 2))\n\n Rows Removed by Filter: 1333332\n\n Buffers: shared hit=15065 read=72136\n\n Planning Time: 0.360 ms\n\n Execution Time: 406.880 ms\n\n(22 rows)\n\n\n\n\n\n\n\n*Expected Behavior\n\nSince these two queries are semantically equivalent, we were hoping that PostgreSQL would evaluate them in roughly the same amount of time.\nIt looks to me that different order of group by clauses triggers different plans: when the group by clauses (ps_partkey, ps_suppkey) is the same as the covering index, it will trigger an index scan on associated columns;\nhowever, when the group by clauses have different order than the covering index (ps_suppkey, ps_partkey), the index scan will not be triggered.\nGiven that the user might not pay close attention to this subtle difference, I was wondering if it is worth making these two queries have the same and predictable performance on Postgresql.\n\n\n*Test Environment\n\nUbuntu 20.04 machine \"Linux panda 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\"\n\nPostgreSQL v12.3\n\nDatabase: TPC-H benchmark (with scale factor 5)\n\nThe description of table partsupp is as follows:\n\ntpch5=# \\d partsupp;\n\n Table \"public.partsupp\"\n\n Column | Type | Collation | Nullable | Default\n\n---------------+------------------------+-----------+----------+---------\n\n ps_partkey | integer | | not null |\n\n ps_suppkey | integer | | not null |\n\n ps_availqty | integer | | not null |\n\n ps_supplycost | numeric(15,2) | | not null |\n\n ps_comment | character varying(199) | | not null |\n\nIndexes:\n\n \"partsupp_pkey\" PRIMARY KEY, btree (ps_partkey, ps_suppkey)\n\nForeign-key constraints:\n\n \"partsupp_fk1\" FOREIGN KEY (ps_suppkey) REFERENCES 
supplier(s_suppkey)\n\n \"partsupp_fk2\" FOREIGN KEY (ps_partkey) REFERENCES part(p_partkey)\n\nReferenced by:\n\n TABLE \"lineitem\" CONSTRAINT \"lineitem_fk2\" FOREIGN KEY (l_partkey, l_suppkey) REFERENCES partsupp(ps_partkey, ps_suppkey)\n\n\n\n\n\n\n*Here are the steps for reproducing our observations:\n\n 1. Download the dataset from the link: https://drive.google.com/file/d/13rFa1BNDi4e2RmXBn-yEQkcqt6lsBu1c/view?usp=sharing\n\n 2. Set up TPC-H benchmark\n\ntar xzvf tpch5_postgresql.tar.gz\n\ncd tpch5_postgresql\n\ndb=tpch5\n\ncreatedb $db\n\npsql -d $db < dss.ddl\n\nfor i in `ls *.tbl`\n\ndo\n\n echo $i\n\n name=`echo $i|cut -d'.' -f1`\n\n psql -d $db -c \"COPY $name FROM '`pwd`/$i' DELIMITER '|' ENCODING 'LATIN1';\"\n\ndone\n\npsql -d $db < dss_postgres.ri\n\n 3. Execute the queries",
"msg_date": "Tue, 2 Mar 2021 04:44:07 +0000",
"msg_from": "\"Liu, Xinyu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential performance issues related to group by and covering index "
},
{
"msg_contents": "út 2. 3. 2021 v 9:53 odesílatel Liu, Xinyu <[email protected]> napsal:\n\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> * Hello, We have 2 TPC-H queries which fetch the same tuples but have\n> significant query execution time differences (4.3 times). We are sharing a\n> pair of TPC-H queries that exhibit this performance difference: First\n> query: SELECT \"ps_comment\", \"ps_suppkey\", \"ps_supplycost\",\n> \"ps_partkey\", \"ps_availqty\" FROM \"partsupp\" WHERE\n> \"ps_partkey\" + 16 < 1 OR \"ps_partkey\" = 2 GROUP\n> BY \"ps_partkey\", \"ps_suppkey\", \"ps_availqty\",\n> \"ps_supplycost\", \"ps_comment\" Second query: SELECT\n> \"ps_comment\", \"ps_suppkey\", \"ps_supplycost\",\n> \"ps_partkey\", \"ps_availqty\" FROM \"partsupp\" WHERE\n> \"ps_partkey\" + 16 < 1 OR \"ps_partkey\" = 2 GROUP BY\n> \"ps_comment\", \"ps_suppkey\", \"ps_supplycost\",\n> \"ps_partkey\", \"ps_availqty\" * Actual Behavior We\n> executed both queries on the TPC-H benchmark of scale factor 5: the first\n> query takes over 1.7 seconds, while the second query only takes 0.4\n> seconds. We think the time difference results from different plans\n> selected. Specifically, in the first (slow) query, the DBMS performs an\n> index scan on table partsupp using the covering index (ps_partkey,\n> ps_suppkey), while the second (fast) query performs a parallel scan on\n> (ps_suppkey, ps_partkey). 
* Query Execution Plan - First query:\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Group (cost=0.43..342188.58 rows=399262 width=144) (actual\n> time=0.058..1737.659 rows=4 loops=1) Group Key: ps_partkey, ps_suppkey\n> Buffers: shared hit=123005 read=98055 -> Index Scan using\n> partsupp_pkey on partsupp (cost=0.43..335522.75 rows=1333167 width=144)\n> (actual time=0.055..1737.651 rows=4 loops=1) Filter: (((ps_partkey\n> + 16) < 1) OR (ps_partkey = 2)) Rows Removed by Filter: 3999996\n> Buffers: shared hit=123005 read=98055 Planning Time: 0.926 ms\n> Execution Time: 1737.754 ms (9 rows) *\n>\n\nIn this case there is brutal overestimation. Probably due planner\nunfriendly written predicate ps_partkey + 16 < 1) OR ps_parkey = 2. You\ncan try to rewrite this predicate to ps_parthkey < -15 OR ps_parkey = 2\n\nRegards\n\nPavel\n\n\n\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> * - Second query:\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Group (cost=250110.68..350438.93 rows=399262 width=144) (actual\n> time=400.353..400.361 rows=4 loops=1) Group Key: ps_suppkey, ps_partkey\n> Buffers: shared hit=5481 read=24093 -> Gather Merge\n> (cost=250110.68..346446.31 rows=798524 width=144) (actual\n> time=400.351..406.741 rows=4 loops=1) Workers Planned: 2\n> Workers Launched: 2 Buffers: shared hit=15151 read=72144\n> -> Group (cost=249110.66..253276.80 rows=399262 width=144)\n> (actual time=395.882..395.883 rows=1 loops=3) Group Key:\n> ps_suppkey, ps_partkey Buffers: shared hit=15151 read=72144\n> -> Sort (cost=249110.66..250499.37 rows=555486 width=144)\n> (actual time=395.880..395.881 rows=1 loops=3) Sort\n> Key: ps_suppkey, ps_partkey Sort Method: quicksort\n> Memory: 25kB Worker 0: Sort Method: 
quicksort\n> Memory: 25kB Worker 1: Sort Method: quicksort\n> Memory: 25kB Buffers: shared hit=15151 read=72144\n> -> Parallel Seq Scan on partsupp\n> (cost=0.00..116363.88 rows=555486 width=144) (actual time=395.518..395.615\n> rows=1 loops=3) Filter: (((ps_partkey + 16) < 1)\n> OR (ps_partkey = 2)) Rows Removed by Filter:\n> 1333332 Buffers: shared hit=15065 read=72136\n> Planning Time: 0.360 ms Execution Time: 406.880 ms (22 rows) *Expected\n> Behavior Since these two queries are semantically equivalent, we were\n> hoping that PostgreSQL would evaluate them in roughly the same amount of\n> time. It looks to me that different order of group by clauses triggers\n> different plans: when the group by clauses (ps_partkey, ps_suppkey) is the\n> same as the covering index, it will trigger an index scan on associated\n> columns; however, when the group by clauses have different order than the\n> covering index (ps_suppkey, ps_partkey), the index scan will not be\n> triggered. Given that the user might not pay close attention to this subtle\n> difference, I was wondering if it is worth making these two queries have\n> the same and predictable performance on Postgresql. 
*Test Environment\n> Ubuntu 20.04 machine \"Linux panda 5.4.0-40-generic #44-Ubuntu SMP Tue Jun\n> 23 00:01:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\" PostgreSQL v12.3\n> Database: TPC-H benchmark (with scale factor 5) The description of table\n> partsupp is as follows: tpch5=# \\d partsupp; Table\n> \"public.partsupp\" Column | Type | Collation |\n> Nullable | Default\n> ---------------+------------------------+-----------+----------+---------\n> ps_partkey | integer | | not null |\n> ps_suppkey | integer | | not null |\n> ps_availqty | integer | | not null |\n> ps_supplycost | numeric(15,2) | | not null |\n> ps_comment | character varying(199) | | not null | Indexes:\n> \"partsupp_pkey\" PRIMARY KEY, btree (ps_partkey, ps_suppkey) Foreign-key\n> constraints: \"partsupp_fk1\" FOREIGN KEY (ps_suppkey) REFERENCES\n> supplier(s_suppkey) \"partsupp_fk2\" FOREIGN KEY (ps_partkey) REFERENCES\n> part(p_partkey) Referenced by: TABLE \"lineitem\" CONSTRAINT\n> \"lineitem_fk2\" FOREIGN KEY (l_partkey, l_suppkey) REFERENCES\n> partsupp(ps_partkey, ps_suppkey) *Here are the steps for reproducing our\n> observations: 1. Download the dataset from the link:\n> https://drive.google.com/file/d/13rFa1BNDi4e2RmXBn-yEQkcqt6lsBu1c/view?usp=sharing\n> <https://drive.google.com/file/d/13rFa1BNDi4e2RmXBn-yEQkcqt6lsBu1c/view?usp=sharing>\n> 2. Set up TPC-H benchmark tar xzvf tpch5_postgresql.tar.gz cd\n> tpch5_postgresql db=tpch5 createdb $db psql -d $db < dss.ddl for i in `ls\n> *.tbl` do echo $i name=`echo $i|cut -d'.' -f1` psql -d $db -c\n> \"COPY $name FROM '`pwd`/$i' DELIMITER '|' ENCODING 'LATIN1';\" done psql -d\n> $db < dss_postgres.ri 3. Execute the queries *\n>\n",
"msg_date": "Tue, 2 Mar 2021 10:08:47 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues related to group by and covering index"
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 21:53, Liu, Xinyu <[email protected]> wrote:\n> *Expected Behavior\n>\n> Since these two queries are semantically equivalent, we were hoping that PostgreSQL would evaluate them in roughly the same amount of time.\n> It looks to me that different order of group by clauses triggers different plans: when the group by clauses (ps_partkey, ps_suppkey) is the same as the covering index, it will trigger an index scan on associated columns;\n> however, when the group by clauses have different order than the covering index (ps_suppkey, ps_partkey), the index scan will not be triggered.\n> Given that the user might not pay close attention to this subtle difference, I was wondering if it is worth making these two queries have the same and predictable performance on Postgresql.\n\nUnfortunately, it would take a pretty major overhaul of the query\nplanner to do that efficiently.\n\nFor now, we have a few smarts involved in trying to make the GROUP BY\nprocessing more efficient:\n\n1) We remove columns from the GROUP BY if they're functionally\ndependent on the primary key, providing the primary key is present\ntoo. (you're seeing this in your example query)\n2) We also change the order of the GROUP BY columns if it's a subset\nof the ORDER BY columns. This is quite good as we'd do grouping by\n{b,a} if someone wrote GROUP BY a,b ORDER BY b,a; which would save\nhaving to re-sort the data for the ORDER BY after doing the GROUP BY.\nThat's especially useful for queries with a LIMIT clause.\n\nIf we want to do anything much smarter than that like trying every\ncombination of the GROUP BY clause, then plan times are likely going\nto explode. The join order search is done based on the chosen query\npathkeys, which in many queries is the pathkeys for the GROUP BY\nclause (see standard_qp_callback()). 
This means throughout the join\nsearch, planner will try and form paths that provide pre-sorted input\nthat allows the group by to be implemented efficiently with pre-sorted\ndata. You might see Merge Joins rather than Hash Joins, for example.\n\nIf we want to try every combination of the GROUP BY columns then it\nmeans repeating that join search once per combination. The join search\nis often, *by far*, the most expensive part of planning a query.\n\nWhile it would be nice if the planner did a better job on selecting\nthe best order for group by columns, unless we can come up with some\nheuristics that allow us to just try a single combination that is\nlikely good, then I don't think anyone would thank us for slowing down\nthe planner by a factor of the number of possible combinations of the\ngroup by columns.\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Mar 2021 22:49:20 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues related to group by and covering index"
},
{
"msg_contents": ">\n> If we want to do anything much smarter than that like trying every\n> combination of the GROUP BY clause, then plan times are likely going\n> to explode. The join order search is done based on the chosen query\n> pathkeys, which in many queries is the pathkeys for the GROUP BY\n> clause (see standard_qp_callback()). This means throughout the join\n> search, planner will try and form paths that provide pre-sorted input\n> that allows the group by to be implemented efficiently with pre-sorted\n> data. You might see Merge Joins rather than Hash Joins, for example.\n>\n\nAre there guidelines or principles you could share about writing the group\nby clause such that it is more efficient?",
"msg_date": "Tue, 2 Mar 2021 14:04:24 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues related to group by and covering index"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 10:04, Michael Lewis <[email protected]> wrote:\n> Are there guidelines or principles you could share about writing the group by clause such that it is more efficient?\n\nIf you have the option of writing them in the same order as an\nexisting btree index that covers the entire GROUP BY clause (in\nversion < PG13) or at least a prefix of the GROUP BY clause (version >=\nPG13), then the planner has a chance to make use of that index to\nprovide pre-sorted input to do group aggregate.\n\nSince PG13 has Incremental Sort, having an index that covers only a\nprefix of the GROUP BY clause may still help.\n\nIf no indexes exist then you might get better performance by putting\nthe most distinct column first. That's because sorts don't need to\ncompare the remaining columns once it receives two different values\nfor one column. That gets more complex when the most distinct column\nis wider than the others. e.g. a text compare is more expensive than\ncomparing two ints. For Hash Aggregate, I don't think the order will\nmatter much.\n\nDavid\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:41:24 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues related to group by and covering index"
},
{
"msg_contents": "In the original example it looks like using the index (and not running\na parallel query) is what made the query slow.\n\nThe fast version was brute-force seqscan(s) + sort with 3 parallel\nbackends (leader + 2 workers) sharing the work.\n\n\nOn Tue, Mar 2, 2021 at 10:42 PM David Rowley <[email protected]> wrote:\n>\n> On Wed, 3 Mar 2021 at 10:04, Michael Lewis <[email protected]> wrote:\n> > Are there guidelines or principles you could share about writing the group by clause such that it is more efficient?\n>\n> If you have the option of writing them in the same order as an\n> existing btree index that covers the entire GROUP BY clause (in\n> version < PG13) or at least prefix of the GROUP BY clause (version >=\n> PG13), then the planner has a chance to make use of that index to\n> provide pre-sorted input to do group aggregate.\n>\n> Since PG13 has Incremental Sort, having an index that covers only a\n> prefix of the GROUP BY clause may still help.\n>\n> If no indexes exist then you might get better performance by putting\n> the most distinct column first. That's because sorts don't need to\n> compare the remaining columns once it receives two different values\n> for one column. That gets more complex when the most distinct column\n> is wider than the others. e.g a text compare is more expensive than\n> comparing two ints. For Hash Aggregate, I don't think the order will\n> matter much.\n>\n> David\n>\n>\n\n\n",
"msg_date": "Fri, 5 Mar 2021 02:14:40 +0100",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential performance issues related to group by and covering index"
}
] |
[
{
"msg_contents": "Hello,\n\n\nWe have 2 TPC-H queries which fetch the same tuples but have significant query execution time differences (22.0 times).\n\n\nWe are sharing a pair of TPC-H queries that exhibit this performance difference:\n\n\nFirst query:\n\nSELECT \"orders3\".\"o_comment\",\n\n \"orders3\".\"o_orderstatus\",\n\n \"orders3\".\"o_orderkey\",\n\n \"t17\".\"ps_partkey\",\n\n \"t17\".\"ps_supplycost\",\n\n \"t17\".\"ps_comment\",\n\n \"orders3\".\"o_clerk\",\n\n \"orders3\".\"o_totalprice\",\n\n \"t17\".\"ps_availqty\",\n\n \"t17\".\"ps_suppkey\"\n\nFROM (\n\n SELECT *\n\n FROM \"partsupp\"\n\n WHERE \"ps_comment\" LIKE ', even theodolites. regular, final theodolites eat after the carefully pending foxes. furiously regular deposits sleep slyly. carefully bold realms above the ironic dependencies haggle careful') AS \"t17\"\n\nLEFT JOIN \"orders\" AS \"orders3\"\n\nON true\n\nORDER BY \"t17\".\"ps_supplycost\"FETCH next 14 rows only\n\n\nSecond query:\n\nSELECT \"orders3\".\"o_comment\",\n\n \"orders3\".\"o_orderstatus\",\n\n \"orders3\".\"o_orderkey\",\n\n \"t17\".\"ps_partkey\",\n\n \"t17\".\"ps_supplycost\",\n\n \"t17\".\"ps_comment\",\n\n \"orders3\".\"o_clerk\",\n\n \"orders3\".\"o_totalprice\",\n\n \"t17\".\"ps_availqty\",\n\n \"t17\".\"ps_suppkey\"\n\nFROM (\n\n SELECT *\n\n FROM \"partsupp\"\n\n WHERE \"ps_comment\" LIKE ', even theodolites. regular, final theodolites eat after the carefully pending foxes. furiously regular deposits sleep slyly. carefully bold realms above the ironic dependencies haggle careful'\n\n ORDER BY \"ps_supplycost\"FETCH next 14 rows only) AS \"t17\"\n\nLEFT JOIN \"orders\" AS \"orders3\"\n\nON true\n\nORDER BY \"t17\".\"ps_supplycost\"FETCH next 14 rows only\n\n\n\n* Actual Behavior\n\nWe executed both queries on the TPC-H benchmark of scale factor 5: the first query takes over 8 seconds, while the second query only takes 0.3 seconds.\nWe think the time difference results from different plans selected. 
Specifically, in the first (slow) query, the DBMS performs a left join using entire table partsupp, while the second (fast) query performs a left join using only 14 rows from partsupp).\n\n\n* Query Execution Plan\n\n * First query:\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=464628.69..464630.32 rows=14 width=223) (actual time=8082.764..8082.767 rows=14 loops=1)\n\n -> Gather Merge (cost=464628.69..1193917.91 rows=6250614 width=223) (actual time=8082.762..8087.639 rows=14 loops=1)\n\n Workers Planned: 2\n\n Workers Launched: 2\n\n -> Sort (cost=463628.66..471441.93 rows=3125307 width=223) (actual time=2933.506..2933.506 rows=5 loops=3)\n\n Sort Key: partsupp.ps_supplycost\n\n Sort Method: quicksort Memory: 25kB\n\n Worker 0: Sort Method: top-N heapsort Memory: 32kB\n\n Worker 1: Sort Method: quicksort Memory: 25kB\n\n -> Nested Loop Left Join (cost=0.00..388506.36 rows=3125307 width=223) (actual time=360.602..1643.471 rows=2500000 loops=3)\n\n -> Parallel Seq Scan on partsupp (cost=0.00..108031.62 rows=1 width=144) (actual time=360.577..360.599 rows=0 loops=3)\n\n Filter: ((ps_comment)::text ~~ ', even theodolites. regular, final theodolites eat after the carefully pending foxes. furiously regular deposits sleep slyly. 
carefully bold realms above the ironic dependencies haggle careful'::text)\n\n Rows Removed by Filter: 1333333\n\n -> Seq Scan on orders orders3 (cost=0.00..205467.37 rows=7500737 width=79) (actual time=0.064..1544.990 rows=7500000 loops=1)\n\n Planning Time: 0.278 ms\n\n Execution Time: 8087.714 ms\n\n(16 rows)\n\n\n * Second query:\n\n QUERY PLAN\n\n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n-----\n\n Limit (cost=109031.74..109032.26 rows=14 width=223) (actual time=363.883..363.890 rows=14 loops=1)\n\n -> Nested Loop Left Join (cost=109031.74..389506.49 rows=7500737 width=223) (actual time=363.882..363.887 rows=14 loops=1)\n\n -> Limit (cost=109031.74..109031.74 rows=1 width=144) (actual time=363.859..363.859 rows=1 loops=1)\n\n -> Sort (cost=109031.74..109031.74 rows=1 width=144) (actual time=363.858..363.858 rows=1 loops=1)\n\n Sort Key: partsupp.ps_supplycost\n\n Sort Method: quicksort Memory: 25kB\n\n -> Gather (cost=1000.00..109031.73 rows=1 width=144) (actual time=363.447..370.107 rows=1 loops=1)\n\n Workers Planned: 2\n\n Workers Launched: 2\n\n -> Parallel Seq Scan on partsupp (cost=0.00..108031.62 rows=1 width=144) (actual time=360.033..360.101 rows=0 loops=3)\n\n Filter: ((ps_comment)::text ~~ ', even theodolites. regular, final theodolites eat after the carefully pending foxes. furiously regular deposits sleep slyly. 
carefully bold realms above the ironic dependencies haggle careful'::t\n\next)\n\n Rows Removed by Filter: 1333333\n\n -> Seq Scan on orders orders3 (cost=0.00..205467.37 rows=7500737 width=79) (actual time=0.016..0.017 rows=14 loops=1)\n\n Planning Time: 0.228 ms\n\n Execution Time: 370.200 ms\n\n(15 rows)\n\n\n\n\n\n\n\n\n*Expected Behavior\n\nSince these two queries are semantically equivalent, we were hoping that PostgreSQL would evaluate them in roughly the same amount of time.\nIt looks to me that there is a missing optimization rule related to pushing the sort operator (i.e., order and limit) through the left join.\nGiven the significant query execution time difference, I was wondering if it is worth adding such a rule to make the system evaluate the first query more efficiently.\nIt would also be helpful if you could comment on if there is a standard practice to evaluate the tradeoff associated with adding such a rule in Postgresql.\n\n\n*Test Environment\n\nUbuntu 20.04 machine \"Linux panda 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\"\n\nPostgreSQL v12.3\n\nDatabase: TPC-H benchmark (with scale factor 5)\n\nThe description of table partsupp and orders is as follows:\n\ntpch5=# \\d partsupp;\n\n Table \"public.partsupp\"\n\n Column | Type | Collation | Nullable | Default\n\n---------------+------------------------+-----------+----------+---------\n\n ps_partkey | integer | | not null |\n\n ps_suppkey | integer | | not null |\n\n ps_availqty | integer | | not null |\n\n ps_supplycost | numeric(15,2) | | not null |\n\n ps_comment | character varying(199) | | not null |\n\nIndexes:\n\n \"partsupp_pkey\" PRIMARY KEY, btree (ps_partkey, ps_suppkey)\n\nForeign-key constraints:\n\n \"partsupp_fk1\" FOREIGN KEY (ps_suppkey) REFERENCES supplier(s_suppkey)\n\n \"partsupp_fk2\" FOREIGN KEY (ps_partkey) REFERENCES part(p_partkey)\n\nReferenced by:\n\n TABLE \"lineitem\" CONSTRAINT \"lineitem_fk2\" FOREIGN KEY 
(l_partkey, l_suppkey) REFERENCES partsupp(ps_partkey, ps_suppkey)\n\ntpch5=# \\d orders;\n\n Table \"public.orders\"\n\n Column | Type | Collation | Nullable | Default\n\n-----------------+-----------------------+-----------+----------+---------\n\n o_orderkey | integer | | not null |\n\n o_custkey | integer | | not null |\n\n o_orderstatus | character(1) | | not null |\n\n o_totalprice | numeric(15,2) | | not null |\n\n o_orderdate | date | | not null |\n\n o_orderpriority | character(15) | | not null |\n\n o_clerk | character(15) | | not null |\n\n o_shippriority | integer | | not null |\n\n o_comment | character varying(79) | | not null |\n\nIndexes:\n\n \"orders_pkey\" PRIMARY KEY, btree (o_orderkey)\n\nForeign-key constraints:\n\n \"orders_fk1\" FOREIGN KEY (o_custkey) REFERENCES customer(c_custkey)\n\nReferenced by:\n\n TABLE \"lineitem\" CONSTRAINT \"lineitem_fk1\" FOREIGN KEY (l_orderkey) REFERENCES orders(o_orderkey)\n\n\n\n\n\n*Here are the steps for reproducing our observations:\n\n 1. Download the dataset from the link: https://drive.google.com/file/d/13rFa1BNDi4e2RmXBn-yEQkcqt6lsBu1c/view?usp=sharing\n\n 2. Set up TPC-H benchmark\n\ntar xzvf tpch5_postgresql.tar.gz\n\ncd tpch5_postgresql\n\ndb=tpch5\n\ncreatedb $db\n\npsql -d $db < dss.ddl\n\nfor i in `ls *.tbl`\n\ndo\n\n echo $i\n\n name=`echo $i|cut -d'.' -f1`\n\n psql -d $db -c \"COPY $name FROM '`pwd`/$i' DELIMITER '|' ENCODING 'LATIN1';\"\n\ndone\n\npsql -d $db < dss_postgres.ri\n\n 1. 
Execute the queries",
"msg_date": "Tue, 2 Mar 2021 04:47:00 +0000",
"msg_from": "\"Liu, Xinyu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issues related to left join and order by"
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 21:53, Liu, Xinyu <[email protected]> wrote:\n> *Expected Behavior\n>\n> Since these two queries are semantically equivalent, we were hoping that PostgreSQL would evaluate them in roughly the same amount of time.\n> It looks to me that there is a missing optimization rule related to pushing the sort operator (i.e., order and limit) through the left join.\n> Given the significant query execution time difference, I was wondering if it is worth adding such a rule to make the system evaluate the first query more efficiently.\n> It would also be helpful if you could comment on if there is a standard practice to evaluate the tradeoff associated with adding such a rule in Postgresql.\n\nWe currently don't attempt to push down LIMIT clauses into subqueries.\nBefore we did that we'd need to get much better at figuring out how\njoins duplicate rows so that we could be sure that we're not limiting\nthe subquery more than the number of records that the outer query will\nneed to reach its limit.\n\nIf you want some advice, you're likely to get more people on your side\nand possible support for making improvements to the query planner if\nyou provide examples that look remotely like real-world queries. In\nthe other emails that I've read from you on this list [1], it seems\nyour example queries are all completely bogus. I suspect that the\nqueries are generated by some fuzz testing tool. I very much imagine\nthat you really don't need help with these at all. With respect, it seems\nto me that there's about zero chance that you genuinely need the\nresults of this query more quickly and you've come for help with that.\n\nBecause PostgreSQL does not proactively cache query plans, ad-hoc\nqueries are always parsed, planned then executed. This means that\nit's often not practical to spend excessive amounts of time planning a\nquery that gets executed just once. 
Adding new optimisations to the\nquery planner means they either have to be very cheap to detect, or\nthey must pay off in many cases.\n\nIf you happen to think there's a genuine case for having the query\nplanner do a better job of doing LIMIT pushdowns into subqueries, then\nyou're welcome to submit a patch to implement that. You'll also need\nto carefully document exactly which cases the LIMIT can be pushed down\nand when it cannot. That's the hard part. The actual pushing down of\nthe clause is dead easy. If you're going to do that, then I'd suggest\nyou come up with better examples than this one. I don't think many\npeople will get on board with your newly proposed optimisations when\nthe queries are obviously not real. It's hard to imagine the\noptimisation being useful to any queries with a query that's so\nobviously not a real one.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/BN7PR07MB52024B973EAB075F4DF6C19ACD999%40BN7PR07MB5202.namprd07.prod.outlook.com\n\n\n",
"msg_date": "Tue, 2 Mar 2021 23:26:26 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issues related to left join and order by"
}
] |
[
{
"msg_contents": "Hi, I have to configure a postgresql in high availability.\nI want to ask you what tool you recommend to manage replication and\nfailover or switchover.\nThanks.\nRegards.-\n\nPablo.",
"msg_date": "Tue, 2 Mar 2021 08:58:51 -0300",
"msg_from": "Rodriguez Pablo A <[email protected]>",
"msg_from_op": true,
"msg_subject": "High availability management tool."
},
{
"msg_contents": "Look into patroni, pg_auto_failover, postgres-operator.\n\nOn Tue, Mar 2, 2021 at 12:59 PM Rodriguez Pablo A <\[email protected]> wrote:\n\n> Hi, I have to configure a postgresql in high availability.\n> I want to ask you what tool you recommend to manage replication and\n> failover or switchover.\n> Thanks.\n> Regards.-\n>\n> Pablo.\n>\n>",
"msg_date": "Thu, 1 Apr 2021 09:59:46 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High availability management tool."
},
{
"msg_contents": "Hi,\n\nYou can investigate patroni in order to manage failover,replication and\nswitchover. The tool mentioned is an open-source tool.\n\nhttps://github.com/zalando/patroni\n\n\nRegards.\n\n\nRodriguez Pablo A <[email protected]>, 2 Mar 2021 Sal, 14:59\ntarihinde şunu yazdı:\n\n> Hi, I have to configure a postgresql in high availability.\n> I want to ask you what tool you recommend to manage replication and\n> failover or switchover.\n> Thanks.\n> Regards.-\n>\n> Pablo.\n>\n>\n\n-- \nHüseyin DEMİR\nDatabase Engineer",
"msg_date": "Thu, 1 Apr 2021 12:08:05 +0300",
"msg_from": "=?UTF-8?Q?H=C3=BCseyin_Demir?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High availability management tool."
},
{
"msg_contents": "In my experience (4 Postgresql cluster 2 pg12 in test and 2 pg10 in\nproduction, 3 of them in Kubernetes, one on discrete VM's), Patroni is\nseverely biased towards AWS-based containers, so it was not an option for\nmy standalone K8s clusters managed by Rancher and based on OCI and\nDigitalOcean VM's.\nIn my tries, Patroni was continuously creating and destroying the first\nreplica, because of a failed connection to itself, on the very same\nendpoint you found successful some log lines above.\nInvestigating the issue further, I've noticed that was not an issue in AWS,\nbut nobody had been testing it outside of AWS, at least at that moment\n(October 2020).\n\nSo, to make a long story short, if you're on AWS you can try Patroni,\notherways I would recommend maybe bitnami-repmgr, which in my tries was\nvery close to working in a real scenario. (I'm still living without an\nauto-failover solution, at the moment).\n\nAdalberto\n\n\n\n\nIl giorno gio 1 apr 2021 alle ore 11:08 Hüseyin Demir <\[email protected]> ha scritto:\n\n> Hi,\n>\n> You can investigate patroni in order to manage failover,replication and\n> switchover. 
The tool mentioned is an open-source tool.\n>\n> https://github.com/zalando/patroni\n>\n>\n> Regards.\n>\n>\n> Rodriguez Pablo A <[email protected]>, 2 Mar 2021 Sal, 14:59\n> tarihinde şunu yazdı:\n>\n>> Hi, I have to configure a postgresql in high availability.\n>> I want to ask you what tool you recommend to manage replication and\n>> failover or switchover.\n>> Thanks.\n>> Regards.-\n>>\n>> Pablo.\n>>\n>>\n>\n> --\n> Hüseyin DEMİR\n> Database Engineer\n>\n\nIn my experience (4 Postgresql cluster 2 pg12 in test and 2 pg10 in production, 3 of them in Kubernetes, one on discrete VM's), Patroni is severely biased towards AWS-based containers, so it was not an option for my standalone K8s clusters managed by Rancher and based on OCI and DigitalOcean VM's.In my tries, Patroni was continuously creating and destroying the first replica, because of a failed connection to itself, on the very same endpoint you found successful some log lines above.Investigating the issue further, I've noticed that was not an issue in AWS, but nobody had been testing it outside of AWS, at least at that moment (October 2020).So, to make a long story short, if you're on AWS you can try Patroni, otherways I would recommend maybe bitnami-repmgr, which in my tries was very close to working in a real scenario. (I'm still living without an auto-failover solution, at the moment).AdalbertoIl giorno gio 1 apr 2021 alle ore 11:08 Hüseyin Demir <[email protected]> ha scritto:Hi, You can investigate patroni in order to manage failover,replication and switchover. The tool mentioned is an open-source tool.https://github.com/zalando/patroniRegards.Rodriguez Pablo A <[email protected]>, 2 Mar 2021 Sal, 14:59 tarihinde şunu yazdı:Hi, I have to configure a postgresql in high availability.I want to ask you what tool you recommend to manage replication and failover or switchover.Thanks.Regards.-Pablo.\n-- Hüseyin DEMİRDatabase Engineer",
"msg_date": "Thu, 1 Apr 2021 11:23:53 +0200",
"msg_from": "Adalberto Caccia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High availability management tool."
}
] |
[
{
"msg_contents": "Hi Everyone,\nI was trying to collect table metadata with a description; the use case is that I need to show all columns of the tables whether it has the description or not. \nI tried the below query, but it only gives column details that have a description and ignore others if not. \n\nPostgres 11 | db<>fiddle\n\n\n| \n| \n| | \nPostgres 11 | db<>fiddle\n\nFree online SQL environment for experimenting and sharing.\n |\n\n |\n\n |\n\n\n\n\n\ncreate table test(id int);create table test1(id int Primary key );comment on column test.id is 'Test descr';\n\n SELECT c.table_schema,c.table_name,c.column_name,case when c.domain_name is not null then c.domain_name when c.data_type='character varying' THEN 'character varying('||c.character_maximum_length||')' when c.data_type='character' THEN 'character('||c.character_maximum_length||')' when c.data_type='numeric' THEN 'numeric('||c.numeric_precision||','||c.numeric_scale||')' else c.data_typeend as data_type,c.is_nullable, (select 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'PRIMARY KEY' ) as is_in_PK,(select distinct 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'FOREIGN KEY' ) as is_in_FK,pgd.description\n\nFROM pg_catalog.pg_statio_all_tables as st Left outer join pg_catalog.pg_description pgd on (pgd.objoid=st.relid) left outer join information_schema.columns c on (pgd.objsubid=c.ordinal_position and c.table_schema=st.schemaname and 
c.table_name=st.relname)where c.table_name='test'order by c.table_schema,c.table_name,c.ordinal_position; \n\nexpected formate is :\n\n| table_schema | table_name | column_name | data_type | is_nullable | is_in_pk | is_in_fk | description |\n\n\n\nany suggestions?\nThanks,Rj\n\nHi Everyone,I was trying to collect table metadata with a description; the use case is that I need to show all columns of the tables whether it has the description or not. I tried the below query, but it only gives column details that have a description and ignore others if not. Postgres 11 | db<>fiddlePostgres 11 | db<>fiddleFree online SQL environment for experimenting and sharing.create table test(id int);create table test1(id int Primary key );comment on column test.id is 'Test descr'; SELECT c.table_schema,c.table_name,c.column_name,case when c.domain_name is not null then c.domain_name when c.data_type='character varying' THEN 'character varying('||c.character_maximum_length||')' when c.data_type='character' THEN 'character('||c.character_maximum_length||')' when c.data_type='numeric' THEN 'numeric('||c.numeric_precision||','||c.numeric_scale||')' else c.data_typeend as data_type,c.is_nullable, (select 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'PRIMARY KEY' ) as is_in_PK,(select distinct 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'FOREIGN KEY' ) as is_in_FK,pgd.descriptionFROM pg_catalog.pg_statio_all_tables as st Left 
outer join pg_catalog.pg_description pgd on (pgd.objoid=st.relid) left outer join information_schema.columns c on (pgd.objsubid=c.ordinal_position and c.table_schema=st.schemaname and c.table_name=st.relname)where c.table_name='test'order by c.table_schema,c.table_name,c.ordinal_position; expected formate is :table_schematable_namecolumn_namedata_typeis_nullableis_in_pkis_in_fkdescriptionany suggestions?Thanks,Rj",
"msg_date": "Wed, 3 Mar 2021 01:20:51 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "tables meta data collection"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 01:20:51AM +0000, Nagaraj Raj wrote:\n> I was trying to collect table metadata with a description; the use case is that I need to show all columns of the tables whether it has the description or not. \n> I tried the below query, but it only gives column details that have a description and ignore others if not. \n\nLooks like you should join information_schema.columns *before*\npg_catalog.pg_description, otherwise the columns view has nothing to join to\nunless the column has a description.\n...or you could use FULL OUTER JOIN on the columns view.\n\n> SELECT c.table_schema,c.table_name,c.column_name,case when c.domain_name is not null then c.domain_name when c.data_type='character varying' THEN 'character varying('||c.character_maximum_length||')' when c.data_type='character' THEN 'character('||c.character_maximum_length||')' when c.data_type='numeric' THEN 'numeric('||c.numeric_precision||','||c.numeric_scale||')' else c.data_typeend as data_type,c.is_nullable, (select 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'PRIMARY KEY' ) as is_in_PK,(select distinct 'Y' from information_schema.table_constraints tcojoin information_schema.key_column_usage kcu on kcu.constraint_name = tco.constraint_name and kcu.constraint_schema = tco.constraint_schema and kcu.constraint_schema = c.table_schema and kcu.table_name = c.table_name and kcu.column_name = c.column_namewhere tco.constraint_type = 'FOREIGN KEY' ) as is_in_FK,pgd.description\n> \n> FROM pg_catalog.pg_statio_all_tables as st Left outer join pg_catalog.pg_description pgd on (pgd.objoid=st.relid) left outer join information_schema.columns c on (pgd.objsubid=c.ordinal_position and 
c.table_schema=st.schemaname and c.table_name=st.relname)where c.table_name='test'order by c.table_schema,c.table_name,c.ordinal_position; \n> \n> expected formate is :\n> \n> | table_schema | table_name | column_name | data_type | is_nullable | is_in_pk | is_in_fk | description |\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n",
"msg_date": "Tue, 2 Mar 2021 22:20:23 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tables meta data collection"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI have a SELECT query that uses a long chain of CTEs (6) and is executed\nrepeatedly as part of the transaction (with different parameters). It is\nexecuted quickly most of the time, but sometimes becomes very slow. I\nmanaged to consistently reproduce the issue by executing a transaction\ncontaining this query on an empty database. The query is fast for the first\n150-170 inserted resources, but ~50% of the executions afterwards take 5.6s\ninstead of 1.4ms. Additionally it only becomes slow if resources are\ninserted in a random order, if I insert resources sorted by\n`start_date_time` column the query is always fast. \n\nThe slow query is part of the transaction (with Repeatable Read isolation\nlevel) that executes *create if not exists* type flow for 211 resources.\nEach resource insertion, inserts multiple rows into each table. Each\nresource has a unique `resource_surrogate_id`, which is part of every row\ninserted for that resource. Inside a transaction search is performed using\nvalues for a resource, before that resource is inserted, hence it always\nreturns 0 rows (only date values are changing and never overlap). 
Resources\nare inserted in the order of increasing `resource_surrogate_id` (only\n`resource_type_id == 52` rows are part of the transaction).\n\nSome more info about the CTEs:\n- cte0: always matches N rows (from 5*N rows inserted for each resource)\n- cte1: always matches N rows (from 5*N rows inserted for each resource)\n- cte2: always matches N rows (from 17*N rows inserted for each resource)\n- cte3 when combined with cte4: always matches 0 rows (from 2*N rows\ninserted for each resource)\n- (if insertions are ordered by start_date_time, cte3 matches 0 rows, and\nquery is fast - execution plan not included)\n\nHere are the results of EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT\nJSON):\n- Slow: https://explain.depesz.com/s/e4Fo\n- Fast: https://explain.depesz.com/s/7HFJ\n\nI have also created a gist:\nhttps://gist.github.com/anyname2/e908d13d515e8970e599eb650cab15fe\n- init.sql - is a script to create tables and indexes\n- parametrized-query.sql - is the query being executed\n- pgdump.sql - database dump\n\nPostgreSQL Version: PostgreSQL 13.2 on x86_64-pc-linux-musl, compiled by gcc\n(Alpine 10.2.1_pre1) 10.2.1 20201203, 64-bit\nSetup: PostgreSQL is running inside a docker container (with default\nparameters), issue is reproducible both in the Kubernetes cluster and\nlocally.\n\nCan anyone help diagnose this?\n\nThere are a few question I have:\n- Repeatable Read transaction is running on an empty database, hence it\nshould not match anything. Why are resources inserted in the current\ntransaction considered in the query? (if I understood the execution plan\ncorrectly)\n- What causes the slow case? How can I rewrite the query to avoid the slow\ncase?\n- Is it my query not optimised? Or should the execution planner handle it\nbetter somehow?\n\n\nThank you,\nValentinas\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 17:55:37 -0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query performance inside a transaction on a clean database"
},
{
"msg_contents": "On Fri, 2021-03-05 at 17:55 +0000, [email protected] wrote:\n> I have a SELECT query that uses a long chain of CTEs (6) and is executed\n> repeatedly as part of the transaction (with different parameters). It is\n> executed quickly most of the time, but sometimes becomes very slow. I\n> managed to consistently reproduce the issue by executing a transaction\n> containing this query on an empty database. The query is fast for the first\n> 150-170 inserted resources, but ~50% of the executions afterwards take 5.6s\n> instead of 1.4ms. Additionally it only becomes slow if resources are\n> inserted in a random order, if I insert resources sorted by\n> `start_date_time` column the query is always fast.\n> \n> Here are the results of EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT\n> JSON):\n> - Slow: https://explain.depesz.com/s/e4Fo\n> - Fast: https://explain.depesz.com/s/7HFJ\n\nIf your transaction modifies the data significantly (which it does if the\ntable is empty before you start), you should throw in an ANALYZE on the\naffected tables occasionally.\n\nNormally, autovacuum takes care of that, but it cannot see your data\nuntil the transaction is committed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 08 Mar 2021 10:17:59 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query performance inside a transaction on a clean database"
}
] |
[
{
"msg_contents": "Hi all,\nI would appreciate if somebody could help me understanding why the query\ndescribed below takes very different execution times to complete, almost\ncompletely randomly.\n\nI have two very \"simple\" tables, A and B:\n\nCREATE TABLE A (\na1 varchar(10) NULL,\na2 varchar(10) NULL,\nv int4 NULL\n);\nCREATE TABLE B (\na1 varchar(10) NULL,\na2 varchar(10) NULL,\nv int4 NULL\n);\n\nI load A and B with some random numbers (10^7 records each one) and, when\nthe loading is complete, I run a query that \"essentially\" counts the number\nof the records obtained through a sequence of JOINs, which has the\nfollowing form:\n SELECT count(*) from (... A join B ....)\n\nThe query does not write/update any value, it is simply a SELECT applied\nover the aforementioned tables (plus some unions and intersections).\n\nThe problem is the following: the query can take between 20 seconds and 4\nminutes to complete. Most of times, when I run the query for the first time\nafter the server initialisation, it takes 20 seconds; but if I re-run it\nagain (without changing anything) right after the first execution, the\nprobability to take more than 4 minutes is very high.\nMy impression is that the \"status\" of the internal structures of the DBMS\nis somehow affected by the first execution, but I cannot figure out neither\nwhat nor how I can fix it.\n\nTo give some additional information, I can state the following facts:\n- There are no other processes reading or writing the tables on the schema\nand the status of A and B is constant.\n- During the loading of A and B (process that takes more or less one minute\nto complete), I get the following messages:\n LOG: checkpoints are occurring too frequently (29 seconds apart).\n HINT: Consider increasing the configuration parameter \"max_wal_size\".\n LOG: checkpoints are occurring too frequently (18 seconds apart)\n HINT: Consider increasing the configuration parameter \"max_wal_size\".\n- I am running my query through 
DBeaver and PostgreSQL runs in a Docker\ncontainer.\n\nThe version that I am using is the following: 13.2 (Debian\n13.2-1.pgdg100+1).\nThe only configuration parameters I have changed are the following:\n- force_parallel_mode = on\n- max_parallel_workers_per_gather = 64\n- parallel_setup_cost = 1\n- parallel_tuple_cost = 0.001\n- work_mem = '800MB'\n\nI hope somebody could help me, as I really don't know why I am experiencing\nsuch a strange behaviour.\n\nThanks a lot.\n\nBest regards,\nFrancesco\n\nHi all,I would appreciate if somebody could help me understanding why the query described below takes very different execution times to complete, almost completely randomly.I have two very \"simple\" tables, A and B:CREATE TABLE A (\ta1 varchar(10) NULL,\ta2 varchar(10) NULL,\tv int4 NULL);CREATE TABLE B (\ta1 varchar(10) NULL,\ta2 varchar(10) NULL,\tv int4 NULL);I load A and B with some random numbers (10^7 records each one) and, when the loading is complete, I run a query that \"essentially\" counts the number of the records obtained through a sequence of JOINs, which has the following form: SELECT count(*) from (... A join B ....)The query does not write/update any value, it is simply a SELECT applied over the aforementioned tables (plus some unions and intersections).The problem is the following: the query can take between 20 seconds and 4 minutes to complete. 
Most of times, when I run the query for the first time after the server initialisation, it takes 20 seconds; but if I re-run it again (without changing anything) right after the first execution, the probability to take more than 4 minutes is very high.My impression is that the \"status\" of the internal structures of the DBMS is somehow affected by the first execution, but I cannot figure out neither what nor how I can fix it.To give some additional information, I can state the following facts:- There are no other processes reading or writing the tables on the schema and the status of A and B is constant.- During the loading of A and B (process that takes more or less one minute to complete), I get the following messages: LOG: checkpoints are occurring too frequently (29 seconds apart). HINT: Consider increasing the configuration parameter \"max_wal_size\". LOG: checkpoints are occurring too frequently (18 seconds apart) HINT: Consider increasing the configuration parameter \"max_wal_size\".- I am running my query through DBeaver and PostgreSQL runs in a Docker container.The version that I am using is the following: 13.2 (Debian 13.2-1.pgdg100+1).The only configuration parameters I have changed are the following:- force_parallel_mode = on- max_parallel_workers_per_gather = 64- parallel_setup_cost = 1- parallel_tuple_cost = 0.001- work_mem = '800MB'I hope somebody could help me, as I really don't know why I am experiencing such a strange behaviour. Thanks a lot.Best regards,Francesco",
"msg_date": "Sat, 6 Mar 2021 22:40:00 +0100",
"msg_from": "Francesco De Angelis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "Hi,\n\n \n\nHave you tried to use EXPLAIN ANALYZE at least?\n\nIt could give valuable information about why this is occurring.\n\n \n\nMichel SALAIS\n\nDe : Francesco De Angelis <[email protected]> \nEnvoyé : samedi 6 mars 2021 22:40\nÀ : [email protected]\nObjet : Fwd: different execution time for the same query (and same DB status)\n\n \n\n \n\nHi all,\n\nI would appreciate if somebody could help me understanding why the query described below takes very different execution times to complete, almost completely randomly.\n\n \n\nI have two very \"simple\" tables, A and B:\n\n \n\nCREATE TABLE A (\na1 varchar(10) NULL,\na2 varchar(10) NULL,\nv int4 NULL\n);\nCREATE TABLE B (\na1 varchar(10) NULL,\na2 varchar(10) NULL,\nv int4 NULL\n);\n\n \n\nI load A and B with some random numbers (10^7 records each one) and, when the loading is complete, I run a query that \"essentially\" counts the number of the records obtained through a sequence of JOINs, which has the following form:\n\n SELECT count(*) from (... A join B ....)\n\n \n\nThe query does not write/update any value, it is simply a SELECT applied over the aforementioned tables (plus some unions and intersections).\n\n \n\nThe problem is the following: the query can take between 20 seconds and 4 minutes to complete. 
Most of times, when I run the query for the first time after the server initialisation, it takes 20 seconds; but if I re-run it again (without changing anything) right after the first execution, the probability to take more than 4 minutes is very high.\n\nMy impression is that the \"status\" of the internal structures of the DBMS is somehow affected by the first execution, but I cannot figure out neither what nor how I can fix it.\n\n \n\nTo give some additional information, I can state the following facts:\n\n- There are no other processes reading or writing the tables on the schema and the status of A and B is constant.\n\n- During the loading of A and B (process that takes more or less one minute to complete), I get the following messages:\n LOG: checkpoints are occurring too frequently (29 seconds apart).\n\n HINT: Consider increasing the configuration parameter \"max_wal_size\".\n\n LOG: checkpoints are occurring too frequently (18 seconds apart)\n\n HINT: Consider increasing the configuration parameter \"max_wal_size\".\n\n- I am running my query through DBeaver and PostgreSQL runs in a Docker container.\n\n \n\nThe version that I am using is the following: 13.2 (Debian 13.2-1.pgdg100+1).\n\nThe only configuration parameters I have changed are the following:\n\n- force_parallel_mode = on\n\n- max_parallel_workers_per_gather = 64\n\n- parallel_setup_cost = 1\n\n- parallel_tuple_cost = 0.001\n\n- work_mem = '800MB'\n\n \n\nI hope somebody could help me, as I really don't know why I am experiencing such a strange behaviour. \n\n \n\nThanks a lot.\n\n \n\nBest regards,\n\nFrancesco\n\n \n\n \n\n \n\n \n\n \n\n\nHi, Have you tried to use EXPLAIN ANALYZE at least?It could give valuable information about why this is occurring. 
Michel SALAISDe : Francesco De Angelis <[email protected]> Envoyé : samedi 6 mars 2021 22:40À : [email protected] : Fwd: different execution time for the same query (and same DB status) Hi all,I would appreciate if somebody could help me understanding why the query described below takes very different execution times to complete, almost completely randomly. I have two very \"simple\" tables, A and B: CREATE TABLE A (a1 varchar(10) NULL,a2 varchar(10) NULL,v int4 NULL);CREATE TABLE B (a1 varchar(10) NULL,a2 varchar(10) NULL,v int4 NULL); I load A and B with some random numbers (10^7 records each one) and, when the loading is complete, I run a query that \"essentially\" counts the number of the records obtained through a sequence of JOINs, which has the following form: SELECT count(*) from (... A join B ....) The query does not write/update any value, it is simply a SELECT applied over the aforementioned tables (plus some unions and intersections). The problem is the following: the query can take between 20 seconds and 4 minutes to complete. Most of times, when I run the query for the first time after the server initialisation, it takes 20 seconds; but if I re-run it again (without changing anything) right after the first execution, the probability to take more than 4 minutes is very high.My impression is that the \"status\" of the internal structures of the DBMS is somehow affected by the first execution, but I cannot figure out neither what nor how I can fix it. To give some additional information, I can state the following facts:- There are no other processes reading or writing the tables on the schema and the status of A and B is constant.- During the loading of A and B (process that takes more or less one minute to complete), I get the following messages: LOG: checkpoints are occurring too frequently (29 seconds apart). HINT: Consider increasing the configuration parameter \"max_wal_size\". 
LOG: checkpoints are occurring too frequently (18 seconds apart) HINT: Consider increasing the configuration parameter \"max_wal_size\".- I am running my query through DBeaver and PostgreSQL runs in a Docker container. The version that I am using is the following: 13.2 (Debian 13.2-1.pgdg100+1).The only configuration parameters I have changed are the following:- force_parallel_mode = on- max_parallel_workers_per_gather = 64- parallel_setup_cost = 1- parallel_tuple_cost = 0.001- work_mem = '800MB' I hope somebody could help me, as I really don't know why I am experiencing such a strange behaviour. Thanks a lot. Best regards,Francesco",
"msg_date": "Sun, 7 Mar 2021 15:51:05 +0100",
"msg_from": "\"Michel SALAIS\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "On Sun, Mar 07, 2021 at 03:51:05PM +0100, Michel SALAIS wrote:\n> \n> Have you tried to use EXPLAIN ANALYZE at least?\n> \n> It could give valuable information about why this is occurring.\n\n+1, and more generally please follow\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions.\n\n\n",
"msg_date": "Sun, 7 Mar 2021 23:08:18 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "Julien Rouhaud <[email protected]> writes:\n> +1, and more generally please follow\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions.\n\nYeah. FWIW, the most likely explanation for the change in behavior is\nthat by the time of the second execution, auto-analyze has managed to\nupdate the table's statistics, and that for some reason that's changing\nthe plan for the worse. But then the next question is why and what\ncan be done about that. See the wiki entry for info that would be\nhelpful in diagnosing this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Mar 2021 11:37:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "You don't mention shared_buffers, which is quite low by default. Not sure\nof the memory of your docker container, but it might be prudent to increase\nshared_buffers to keep as much data as possible in memory rather than\nneeding to read from disk by a second run. To test the possibility Tom Lane\nsuggested, do a manual analyze after data insert to see if it is also slow.\nExplain (analyze, buffers) select... and then using\nhttps://explain.depesz.com/ or https://tatiyants.com/pev/#/plans/new would\nbe a good option to have some visualization on where the query is going off\nthe rails.\n\nYou don't mention shared_buffers, which is quite low by default. Not sure of the memory of your docker container, but it might be prudent to increase shared_buffers to keep as much data as possible in memory rather than needing to read from disk by a second run. To test the possibility Tom Lane suggested, do a manual analyze after data insert to see if it is also slow. Explain (analyze, buffers) select... and then using https://explain.depesz.com/ or https://tatiyants.com/pev/#/plans/new would be a good option to have some visualization on where the query is going off the rails.",
"msg_date": "Mon, 8 Mar 2021 10:03:25 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "Hello,\nmany thanks to all the persons who replied.\nI am back with the information requested in\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions.\nHere you can find the results of the EXPLAIN commands:\n1) First execution: https://explain.depesz.com/s/X2as\n2) Second execution (right after the first one):\nhttps://explain.depesz.com/s/gHrb\n\nTable A and B are both loaded with 7500000 records, whereas Table C\ncontains 1600 records.\nBoth queries are executed with work_mem=800MB (the other parameters are\nreported in Section 7).\nWith such a value, I noticed also the following phenomenon: in addition to\nvariable execution times (as previusly stated, the range is between 20\nseconds and 4 minutes),\nsometimes the query crashes, returning the following error: SQL Error\n[57P03]: FATAL: the database system is in recovery mode.\nWhen this happens, in the log file I find many messages like these ones:\n...\nUTC [1] LOG: terminating any other active server processes\nUTC [115] WARNING: terminating connection because of crash of another\nserver process\nUTC [115] DETAIL: The postmaster has commanded this server process to roll\nback the current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\n...\nFATAL: the database system is in recovery mode\nLOG: redo starts at 0/E8BFC8D0\nLOG: invalid record length at 0/E8BFC9B8: wanted 24, got 0\nLOG: redo done at 0/E8BFC980\n\nIf I re-run the query with with work_mem=400MB, the execution never crashes\n(at least in all the tests I carried out) and response times are pretty\nstable (always 4 minutes, plus/minus a small delta).\nSo I started wondering whether variable response times and crashes are\nsomehow correlated and due to the fact that the work_mem value is too high.\nAnyway, in the following sections I tried to report all the information\ndescribed 
in the wiki.\n\n============\n1. Version\n============\n\nPostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\n============\n2. Tables\n============\n\nCREATE TABLE A (\na1 varchar(100),\na2 varchar(100),\nv int4 primary key\n);\n\n\nCREATE TABLE B (\na1 varchar(100),\na2 varchar(100),\nv int4 primary key\n);\n\nCREATE TABLE C (\nla1 varchar(100),\nla2 varchar(100),\nva1 varchar(100),\nva2 varchar(100),\nres varchar (100),\nc text\n);\n\n Schema | Name | Type | Owner\n--------+------+-------+-------\n public | a | table | admin\n public | b | table | admin\n public | c | table | admin\n(3 rows)\n\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description\n--------+------+-------+-------+-------------+--------+-------------\n public | a | table | admin | permanent | 317 MB |\n(1 row)\n\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description\n--------+------+-------+-------+-------------+--------+-------------\n public | b | table | admin | permanent | 317 MB |\n(1 row)\n\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description\n--------+------+-------+-------+-------------+--------+-------------\n public | c | table | admin | permanent | 152 kB |\n(1 row)\n\n============\n3. 
Indices\n============\n\ntablename | indexname | indexdef\n-----------+-------------+--------------------------------------------------------\n a | a_pkey | CREATE UNIQUE INDEX a_pkey ON public.a USING\nbtree (v)\n a | hash_pka | CREATE INDEX hash_pka ON public.a USING hash (v)\n b | b_pkey | CREATE UNIQUE INDEX b_pkey ON public.b USING\nbtree (v)\n b | hash_pkb | CREATE INDEX hash_pkb ON public.b USING hash (v)\n c | hash_class0 | CREATE INDEX hash_class0 ON public.c USING hash\n(la1)\n c | hash_class1 | CREATE INDEX hash_class1 ON public.c USING hash\n(la2)\n c | hash_class2 | CREATE INDEX hash_class2 ON public.c USING hash\n(va1)\n c | hash_class3 | CREATE INDEX hash_class3 ON public.c USING hash\n(va2)\n c | hash_pkc | CREATE INDEX hash_pkc ON public.c USING hash (c)\n(9 rows)\n\n============\n3. Table metadata\n============\nrelname|relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n-------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\nb | 40541|7499970.0| 40541|r | 3|false\n|NULL | 332226560|\na | 40541|7499970.0| 40541|r | 3|false\n|NULL | 332226560|\nc | 14| 1600.0| 14|r | 6|false\n|NULL | 155648|\n\n============\n4. 
Hardware and its performances\n============\nMacBook Pro (15-inch, 2018),\n2.2 GHz Intel Core i7,\n16 GB 2400 MHz DDR4\n\nResults from bonnie++:\nroot@c1a8bd8320a0:/# bonnie++ -f -n0 -x4 -d /var/lib/postgresql/ -u root\nUsing uid:0, gid:0.\nformat_version,bonnie_version,name,concurrency,seed,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latency\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.98,1.98,c1a8bd8320a0,1,1615294449,4G,,8192,5,,,248685,79,223758,81,,,362721,93,11439,463,,,,,,,,,,,,,,,,,,,1181ms,1186ms,,5784us,13082us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.98,1.98,c1a8bd8320a0,1,1615294449,4G,,8192,5,,,259711,83,235606,81,,,402573,99,13113,687,,,,,,,,,,,,,,,,,,,692ms,1553ms,,1569us,21954us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.98,1.98,c1a8bd8320a0,1,1615294449,4G,,8192,5,,,249913,84,207899,79,,,383606,99,12570,675,,,,,,,,,,,,,,,,,,,1363ms,940ms,,4864us,64727us,,,,,,\nWriting intelligently...done\nRewriting...done\nReading intelligently...done\nstart 'em...done...done...done...done...done...\n1.98,1.98,c1a8bd8320a0,1,1615294449,4G,,8192,5,,,265425,83,215268,81,,,373339,100,+++++,+++,,,,,,,,,,,,,,,,,,,817ms,920ms,,2355us,4786us,,,,,,\n\nResults from dd:\n14032+0 records in\n14032+0 records out\n14713618432 bytes transferred in 
4.399248 secs (3344575895 bytes/sec)\n 4.40 real 0.01 user 3.39 sys\n\n============\n5. Maintenance setup\n============\n\nrelid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del|n_tup_hot_upd|n_live_tup|n_dead_tup|n_mod_since_analyze|n_ins_since_vacuum|last_vacuum|last_autovacuum\n |last_analyze|last_autoanalyze\n|vacuum_count|autovacuum_count|analyze_count|autoanalyze_count|\n-----|----------|-------|--------|------------|--------|-------------|---------|---------|---------|-------------|----------|----------|-------------------|------------------|-----------|-------------------|------------|-------------------|------------|----------------|-------------|-----------------|\n34944|public |b | 23| 30000000| 8| 0|\n 7500000| 0| 0| 0| 7499970| 0|\n 0| 0| |2021-03-09 19:52:43|\n |2021-03-09 19:52:47| 0| 1| 0|\n 1|\n34949|public |a | 24| 37500000| 8| 0|\n 7500000| 0| 0| 0| 7499970| 0|\n 0| 0| |2021-03-09 19:53:08|\n |2021-03-09 19:53:12| 0| 1| 0|\n 1|\n34956|public |c | 7| 1600|20000000| 2025000000|\n1600| 0| 0| 0| 1600| 0|\n 0| 0| |2021-03-09 20:20:25|\n |2021-03-09 20:20:25| 0| 1| 0|\n 1|\n\n\n============\n6. 
Statistics\n============\n\nfrac_mcv\n |tablename|attname|inherited|null_frac|n_distinct|n_mcv|n_hist|correlation|\n----------|---------|-------|---------|---------|----------|-----|------|-----------|\n |a |v |false | 0.0| -1.0| | 101|\n 1.0|\n |c |c |false | 0.0| -1.0| |\n101|0.016762745|\n |b |v |false | 0.0| -1.0| | 101|\n 1.0|\n |c |res |false | 0.0| -1.0| |\n101|0.008681587|\n0.37374988|c |la1 |false | 0.0| -0.500625| 100| 101|\n0.04232038|\n0.37374988|c |la2 |false | 0.0| -0.500625| 100| 101|\n0.03672261|\n0.37374988|c |va1 |false | 0.0| -0.500625| 100| 101|\n0.06667662|\n0.37374988|c |va2 |false | 0.0| -0.500625| 100| 101|\n0.06553146|\n0.19663344|a |a1 |false | 0.0| 800.0| 100| 101|\n0.66633326|\n0.19663344|a |a2 |false | 0.0| 800.0| 100| 101|\n0.66686845|\n0.19660011|b |a1 |false | 0.0| 800.0| 100| 101|\n 0.6654024|\n0.19660011|b |a2 |false | 0.0| 800.0| 100| 101|\n 0.6656135|\n\n============\n7. Settings\n============\nallow_system_table_mods off\napplication_name DBeaver 21.0.0 - SQLEditor <Script-1.sql>\narchive_cleanup_command\narchive_command (disabled)\narchive_mode off\narchive_timeout 0\narray_nulls on\nauthentication_timeout 1min\nautovacuum on\nautovacuum_analyze_scale_factor 0.1\nautovacuum_analyze_threshold 50\nautovacuum_freeze_max_age 200000000\nautovacuum_max_workers 3\nautovacuum_multixact_freeze_max_age 400000000\nautovacuum_naptime 1min\nautovacuum_vacuum_cost_delay 2ms\nautovacuum_vacuum_cost_limit -1\nautovacuum_vacuum_insert_scale_factor 0.2\nautovacuum_vacuum_insert_threshold 1000\nautovacuum_vacuum_scale_factor 0.2\nautovacuum_vacuum_threshold 50\nautovacuum_work_mem -1\nbackend_flush_after 0\nbackslash_quote safe_encoding\nbacktrace_functions\nbgwriter_delay 200ms\nbgwriter_flush_after 512kB\nbgwriter_lru_maxpages 100\nbgwriter_lru_multiplier 2\nblock_size 8192\nbonjour off\nbonjour_name\nbytea_output hex\ncheck_function_bodies on\ncheckpoint_completion_target 0.5\ncheckpoint_flush_after 256kB\ncheckpoint_timeout 
5min\ncheckpoint_warning 30s\nclient_encoding UTF8\nclient_min_messages notice\ncluster_name\ncommit_delay 0\ncommit_siblings 5\nconfig_file /var/lib/postgresql/data/pgdata/postgresql.conf\nconstraint_exclusion partition\ncpu_index_tuple_cost 0.005\ncpu_operator_cost 0.0025\ncpu_tuple_cost 0.01\ncursor_tuple_fraction 0.1\ndata_checksums off\ndata_directory /var/lib/postgresql/data/pgdata\ndata_directory_mode 0700\ndata_sync_retry off\nDateStyle ISO, MDY\ndb_user_namespace off\ndeadlock_timeout 1s\ndebug_assertions off\ndebug_pretty_print on\ndebug_print_parse off\ndebug_print_plan off\ndebug_print_rewritten off\ndefault_statistics_target 100\ndefault_table_access_method heap\ndefault_tablespace\ndefault_text_search_config pg_catalog.english\ndefault_transaction_deferrable off\ndefault_transaction_isolation read committed\ndefault_transaction_read_only off\ndynamic_library_path $libdir\ndynamic_shared_memory_type posix\neffective_cache_size 4GB\neffective_io_concurrency 1\nenable_bitmapscan on\nenable_gathermerge on\nenable_hashagg on\nenable_hashjoin on\nenable_incremental_sort on\nenable_indexonlyscan on\nenable_indexscan on\nenable_material on\nenable_mergejoin on\nenable_nestloop on\nenable_parallel_append on\nenable_parallel_hash on\nenable_partition_pruning on\nenable_partitionwise_aggregate off\nenable_partitionwise_join off\nenable_seqscan on\nenable_sort on\nenable_tidscan on\nescape_string_warning on\nevent_source PostgreSQL\nexit_on_error off\nextension_destdir\nexternal_pid_file\nextra_float_digits 3\nforce_parallel_mode on\nfrom_collapse_limit 8\nfsync on\nfull_page_writes on\ngeqo on\ngeqo_effort 5\ngeqo_generations 0\ngeqo_pool_size 0\ngeqo_seed 0\ngeqo_selection_bias 2\ngeqo_threshold 12\ngin_fuzzy_search_limit 0\ngin_pending_list_limit 4MB\nhash_mem_multiplier 1\nhba_file /var/lib/postgresql/data/pgdata/pg_hba.conf\nhot_standby on\nhot_standby_feedback off\nhuge_pages try\nident_file 
/var/lib/postgresql/data/pgdata/pg_ident.conf\nidle_in_transaction_session_timeout 0\nignore_checksum_failure off\nignore_invalid_pages off\nignore_system_indexes off\ninteger_datetimes on\nIntervalStyle postgres\njit on\njit_above_cost 100000\njit_debugging_support off\njit_dump_bitcode off\njit_expressions on\njit_inline_above_cost 500000\njit_optimize_above_cost 500000\njit_profiling_support off\njit_provider llvmjit\njit_tuple_deforming on\njoin_collapse_limit 8\nkrb_caseins_users off\nkrb_server_keyfile FILE:/etc/postgresql-common/krb5.keytab\nlc_collate en_US.utf8\nlc_ctype en_US.utf8\nlc_messages en_US.utf8\nlc_monetary en_US.utf8\nlc_numeric en_US.utf8\nlc_time en_US.utf8\nlisten_addresses *\nlo_compat_privileges off\nlocal_preload_libraries\nlock_timeout 0\nlog_autovacuum_min_duration -1\nlog_checkpoints off\nlog_connections off\nlog_destination stderr\nlog_directory log\nlog_disconnections off\nlog_duration off\nlog_error_verbosity default\nlog_executor_stats off\nlog_file_mode 0600\nlog_filename postgresql-%Y-%m-%d_%H%M%S.log\nlog_hostname off\nlog_line_prefix %m [%p]\nlog_lock_waits off\nlog_min_duration_sample -1\nlog_min_duration_statement -1\nlog_min_error_statement error\nlog_min_messages warning\nlog_parameter_max_length -1\nlog_parameter_max_length_on_error 0\nlog_parser_stats off\nlog_planner_stats off\nlog_replication_commands off\nlog_rotation_age 1d\nlog_rotation_size 10MB\nlog_statement none\nlog_statement_sample_rate 1\nlog_statement_stats off\nlog_temp_files -1\nlog_timezone Etc/UTC\nlog_transaction_sample_rate 0\nlog_truncate_on_rotation off\nlogging_collector off\nlogical_decoding_work_mem 64MB\nmaintenance_io_concurrency 10\nmaintenance_work_mem 64MB\nmax_connections 100\nmax_files_per_process 1000\nmax_function_args 100\nmax_identifier_length 63\nmax_index_keys 32\nmax_locks_per_transaction 64\nmax_logical_replication_workers 4\nmax_parallel_maintenance_workers 2\nmax_parallel_workers 8\nmax_parallel_workers_per_gather 
16\nmax_pred_locks_per_page 2\nmax_pred_locks_per_relation -2\nmax_pred_locks_per_transaction 64\nmax_prepared_transactions 0\nmax_replication_slots 10\nmax_slot_wal_keep_size -1\nmax_stack_depth 2MB\nmax_standby_archive_delay 30s\nmax_standby_streaming_delay 30s\nmax_sync_workers_per_subscription 2\nmax_wal_senders 10\nmax_wal_size 1GB\nmax_worker_processes 8\nmin_parallel_index_scan_size 512kB\nmin_parallel_table_scan_size 8MB\nmin_wal_size 80MB\nold_snapshot_threshold -1\noperator_precedence_warning off\nparallel_leader_participation on\nparallel_setup_cost 1\nparallel_tuple_cost 0.001\npassword_encryption md5\nplan_cache_mode auto\nplpgsql.check_asserts on\nplpgsql.extra_errors none\nplpgsql.extra_warnings none\nplpgsql.print_strict_params off\nplpgsql.variable_conflict error\nport 5432\npost_auth_delay 0\npre_auth_delay 0\nprimary_conninfo\nprimary_slot_name\npromote_trigger_file\nquote_all_identifiers off\nrandom_page_cost 4\nrecovery_end_command\nrecovery_min_apply_delay 0\nrecovery_target\nrecovery_target_action pause\nrecovery_target_inclusive on\nrecovery_target_lsn\nrecovery_target_name\nrecovery_target_time\nrecovery_target_timeline latest\nrecovery_target_xid\nrestart_after_crash on\nrestore_command\nrow_security on\nsearch_path \"$user\", public\nsegment_size 1GB\nseq_page_cost 1\nserver_encoding UTF8\nserver_version 13.2 (Debian 13.2-1.pgdg100+1)\nserver_version_num 130002\nsession_preload_libraries\nsession_replication_role origin\nshared_buffers 128MB\nshared_memory_type mmap\nshared_preload_libraries plugin_debugger\nssl off\nssl_ca_file\nssl_cert_file server.crt\nssl_ciphers HIGH:MEDIUM:+3DES:!aNULL\nssl_crl_file\nssl_dh_params_file\nssl_ecdh_curve prime256v1\nssl_key_file server.key\nssl_library OpenSSL\nssl_max_protocol_version\nssl_min_protocol_version TLSv1.2\nssl_passphrase_command\nssl_passphrase_command_supports_reload off\nssl_prefer_server_ciphers on\nstandard_conforming_strings on\nstatement_timeout 0\nstats_temp_directory 
pg_stat_tmp\nsuperuser_reserved_connections 3\nsynchronize_seqscans on\nsynchronous_commit on\nsynchronous_standby_names\nsyslog_facility local0\nsyslog_ident postgres\nsyslog_sequence_numbers on\nsyslog_split_messages on\ntcp_keepalives_count 9\ntcp_keepalives_idle 7200\ntcp_keepalives_interval 75\ntcp_user_timeout 0\ntemp_buffers 8MB\ntemp_file_limit -1\ntemp_tablespaces\nTimeZone Europe/Zurich\ntimezone_abbreviations Default\ntrace_notify off\ntrace_recovery_messages log\ntrace_sort off\ntrack_activities on\ntrack_activity_query_size 1kB\ntrack_commit_timestamp off\ntrack_counts on\ntrack_functions none\ntrack_io_timing on\ntransaction_deferrable off\ntransaction_isolation read committed\ntransaction_read_only off\ntransform_null_equals off\nunix_socket_directories /var/run/postgresql\nunix_socket_group\nunix_socket_permissions 0777\nupdate_process_title on\nvacuum_cleanup_index_scale_factor 0.1\nvacuum_cost_delay 0\nvacuum_cost_limit 200\nvacuum_cost_page_dirty 20\nvacuum_cost_page_hit 1\nvacuum_cost_page_miss 10\nvacuum_defer_cleanup_age 0\nvacuum_freeze_min_age 50000000\nvacuum_freeze_table_age 150000000\nvacuum_multixact_freeze_min_age 5000000\nvacuum_multixact_freeze_table_age 150000000\nwal_block_size 8192\nwal_buffers 4MB\nwal_compression off\nwal_consistency_checking\nwal_init_zero on\nwal_keep_size 0\nwal_level replica\nwal_log_hints off\nwal_receiver_create_temp_slot off\nwal_receiver_status_interval 10s\nwal_receiver_timeout 1min\nwal_recycle on\nwal_retrieve_retry_interval 5s\nwal_segment_size 16MB\nwal_sender_timeout 1min\nwal_skip_threshold 2MB\nwal_sync_method fdatasync\nwal_writer_delay 200ms\nwal_writer_flush_after 1MB\nwork_mem 800MB\nxmlbinary base64\nxmloption content\nzero_damaged_pages off\n\nAgain, thanks a lot for the time you spent on my messages.\n\nBest regards,\nFrancesco De Angelis\n\n\nYou don't mention shared_buffers, which is quite low by default. 
Not sure\n> of the memory of your docker container, but it might be prudent to increase\n> shared_buffers to keep as much data as possible in memory rather than\n> needing to read from disk by a second run. To test the possibility Tom Lane\n> suggested, do a manual analyze after data insert to see if it is also slow.\n> Explain (analyze, buffers) select... and then using\n> https://explain.depesz.com/ or https://tatiyants.com/pev/#/plans/new\n> would be a good option to have some visualization on where the query is\n> going off the rails.\n>\n",
"msg_date": "Tue, 9 Mar 2021 23:58:05 +0100",
"msg_from": "Francesco De Angelis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: different execution time for the same query (and same DB status)"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 10:40:00PM +0100, Francesco De Angelis wrote:\n> The problem is the following: the query can take between 20 seconds and 4\n> minutes to complete. Most of times, when I run the query for the first time\n> after the server initialisation, it takes 20 seconds; but if I re-run it\n> again (without changing anything) right after the first execution, the\n> probability to take more than 4 minutes is very high.\n\nOn Tue, Mar 09, 2021 at 11:58:05PM +0100, Francesco De Angelis wrote:\n> With such a value, I noticed also the following phenomenon: in addition to\n> variable execution times (as previusly stated, the range is between 20\n> seconds and 4 minutes),\n\nYou said it takes between 20s and 4min (240s), but both the explain analyze\nshow ~1300s.\n\nexplain analyze can be slower than the query, due to timing overhead.\nIs that what's happening here? You could try explain(analyze,timing off,buffers).\nYou should send a result for the \"20sec\" result, and one for the \"4min\" result,\nto compare.\n\nI assume the crash is a result of OOM - you could find the result in dmesg\noutput (\"Out of memory: Killed process\") or the postgres logfile will say\n\"terminated by signal 9: Killed\". It's important to avoid setting work_mem so\nhigh that the process is killed and has to go into recovery mode.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 10 Mar 2021 02:27:53 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: different execution time for the same query (and same DB\n status)"
},
{
"msg_contents": "Hello,\nyes exactly in the previous analyses, as mentioned in the wiki, I ran\nEXPLAIN (ANALYZE, BUFFERS)* query*, which took much longer to complete\n(around 30 minutes) as showed in https://explain.depesz.com/s/gHrb and\nhttps://explain.depesz.com/s/X2as .\nAs you said, I did the new tests with EXPLAIN (ANALYZE, timing off BUFFERS)*\nquery,* and these are the results:\n- First execution: https://explain.depesz.com/s/ynAv\n- Second execution: https://explain.depesz.com/s/z1eb\nNow they are pretty aligned with the execution time of *query* (a few\nseconds more to complete) and the difference between the first and second\nexecution is visible.\nAlso, from what I can see, the plans are different...\n\nIl giorno mer 10 mar 2021 alle ore 09:27 Justin Pryzby <[email protected]>\nha scritto:\n\n> On Sat, Mar 06, 2021 at 10:40:00PM +0100, Francesco De Angelis wrote:\n> > The problem is the following: the query can take between 20 seconds and 4\n> > minutes to complete. Most of times, when I run the query for the first\n> time\n> > after the server initialisation, it takes 20 seconds; but if I re-run it\n> > again (without changing anything) right after the first execution, the\n> > probability to take more than 4 minutes is very high.\n>\n> On Tue, Mar 09, 2021 at 11:58:05PM +0100, Francesco De Angelis wrote:\n> > With such a value, I noticed also the following phenomenon: in addition\n> to\n> > variable execution times (as previusly stated, the range is between 20\n> > seconds and 4 minutes),\n>\n> You said it takes between 20s and 4min (240s), but both the explain analyze\n> show ~1300s.\n>\n> explain analyze can be slower than the query, due to timing overhead.\n> Is that what's happening here? 
You could try explain(analyze,timing\n> off,buffers).\n> You should send a result for the \"20sec\" result, and one for the \"4min\"\n> result,\n> to compare.\n>\n> I assume the crash is a result of OOM - you could find the result in dmesg\n> output (\"Out of memory: Killed process\") or the postgres logfile will say\n> \"terminated by signal 9: Killed\". It's important to avoid setting\n> work_mem so\n> high that the process is killed and has to go into recovery mode.\n>\n> --\n> Justin\n>",
"msg_date": "Wed, 10 Mar 2021 12:48:25 +0100",
"msg_from": "Francesco De Angelis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: different execution time for the same query (and same DB\n status)"
},
{
"msg_contents": "I would increase shared_buffers to 1GB or more. Also, it would be very\ninteresting to see these queries executed with JIT off.\n\nI would increase shared_buffers to 1GB or more. Also, it would be very interesting to see these queries executed with JIT off.",
"msg_date": "Wed, 10 Mar 2021 06:29:18 -0700",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: different execution time for the same query (and same DB\n status)"
},
{
"msg_contents": "I have re-tested the execution times with several different values of\nshared_buffers in the range 256 MB - 4 GB.\nIt didn't solve the problem and I noticed that for values greater than 3GB\nthe executions halt very frequently.\nI also tried to disable JIT and this further slowed it down.\nBut there is an interesting news. I managed to exploit some properties of\nthe data I am modelling and I have changed the types of the tables and the\nquery as follows:\n\nCREATE TABLE A (\na1 int2,\na2 int2,\nv int4 primary key\n);\n\nCREATE TABLE B (\na1 int2,\na2 int2,\nv int4 primary key\n);\n\ncreate index hash_pkA on A using hash(v);\ncreate index hash_pkB on B using hash(v);\n\nCREATE TABLE C (\nla1 int2,\nla2 int2,\nva1 int2,\nva2 int2,\nres text,\nc int8\n);\ncreate index hash_C on C using hash(c);\n\nselect count(*) from (\n((select A.v,\ncoalesce(A.a1,0) as la1,\ncoalesce(A.a2,0) as la2,\ncoalesce(B.a1,0) as va1,\ncoalesce(B.a2,0) as va2\nfrom A\nleft join B on A.v = B.v)\nunion all\nselect B.v,\n0 as la1,\n0 as la2,\nB.a1 as va1,\nB.a2 as va2\nfrom B where B.v not in (select A.v from A))as\nta inner join C on\nta.la1 | (ta.la2::int8 << 10) |\n(ta.va1::int8 << 20) |\n(ta.va2::int8 << 30)\n= C.c);\n\nWith these changes I get stable results around 15 seconds. Here is the\nplan: https://explain.depesz.com/s/Y9dT.\nI also verified that I can decrease work_mem to 300MB (against 800MB of the\noriginal query) by keeping the same execution time. In the original one,\ndecreasing such a value worsens the overall performances instead.\nIn the new query there is only one comparison on a single column (they were\nfour in the original one) and I started guessing whether, in the previous\ncase, the DBMS considers the overall memory consumption is too high and\nchanges the plan. 
If yes, I would be interesting in understanding how the\noptimisation algorithm works and whether there is a way to disable it.\nIn this way I can try to better figure out what to do, in the future, in\ncase the data model cannot be re-arranged like in this case.\nThanks again.\n\nBest regards,\nFrancesco De Angelis\n\n\nIl giorno mer 10 mar 2021 alle ore 14:29 Michael Lewis <[email protected]>\nha scritto:\n\n> I would increase shared_buffers to 1GB or more. Also, it would be very\n> interesting to see these queries executed with JIT off.\n>",
"msg_date": "Fri, 12 Mar 2021 14:39:13 +0100",
"msg_from": "Francesco De Angelis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: different execution time for the same query (and same DB\n status)"
},
{
"msg_contents": "",
"msg_date": "Thu, 18 Mar 2021 14:44:23 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "How do we hint a query to use index in postgre"
},
{
"msg_contents": "There is not such functionality, you can enable/disable special joins,\nor update statistics to drive the engine to use the most appropriate\none.\n\n\nOn Thu, Mar 18, 2021 at 10:14 AM Manish Lad <[email protected]> wrote:\n\n\n\n-- \ncpp-today.blogspot.com\n\n\n",
"msg_date": "Thu, 18 Mar 2021 10:15:58 +0100",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do we hint a query to use index in postgre"
},
{
"msg_contents": "Thank you for your response.\n\n\n\n\nOn Thu, 18 Mar 2021, 14:46 Gaetano Mendola, <[email protected]> wrote:\n\n> There is not such functionality, you can enable/disable special joins,\n> or update statistics to drive the engine to use the most appropriate\n> one.\n>\n>\n> On Thu, Mar 18, 2021 at 10:14 AM Manish Lad <[email protected]>\n> wrote:\n>\n>\n>\n> --\n> cpp-today.blogspot.com\n>\n\nThank you for your response. On Thu, 18 Mar 2021, 14:46 Gaetano Mendola, <[email protected]> wrote:There is not such functionality, you can enable/disable special joins,\nor update statistics to drive the engine to use the most appropriate\none.\n\n\nOn Thu, Mar 18, 2021 at 10:14 AM Manish Lad <[email protected]> wrote:\n\n\n\n-- \ncpp-today.blogspot.com",
"msg_date": "Thu, 18 Mar 2021 15:07:17 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do we hint a query to use index in postgre"
},
{
"msg_contents": "How about using pg_hint_plans?\n\nCheck it out if it helps your case.\n\nThanks and Regards,\nNikhil\n\nOn Thu, Mar 18, 2021, 15:07 Manish Lad <[email protected]> wrote:\n\n> Thank you for your response.\n>\n>\n>\n>\n> On Thu, 18 Mar 2021, 14:46 Gaetano Mendola, <[email protected]> wrote:\n>\n>> There is not such functionality, you can enable/disable special joins,\n>> or update statistics to drive the engine to use the most appropriate\n>> one.\n>>\n>>\n>> On Thu, Mar 18, 2021 at 10:14 AM Manish Lad <[email protected]>\n>> wrote:\n>>\n>>\n>>\n>> --\n>> cpp-today.blogspot.com\n>>\n>\n\nHow about using pg_hint_plans?Check it out if it helps your case.Thanks and Regards,NikhilOn Thu, Mar 18, 2021, 15:07 Manish Lad <[email protected]> wrote:Thank you for your response. On Thu, 18 Mar 2021, 14:46 Gaetano Mendola, <[email protected]> wrote:There is not such functionality, you can enable/disable special joins,\nor update statistics to drive the engine to use the most appropriate\none.\n\n\nOn Thu, Mar 18, 2021 at 10:14 AM Manish Lad <[email protected]> wrote:\n\n\n\n-- \ncpp-today.blogspot.com",
"msg_date": "Fri, 26 Mar 2021 17:11:00 +0530",
"msg_from": "Nikhil Shetty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How do we hint a query to use index in postgre"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a quick question, does role custom parameters settings will be granted to users well? \nDoes user c_role will have the same settings m_role.CREATE ROLE m_role ;CREATE ROLE c_role ;ALTER ROLE m_role SET configuration_parameter TO 'VALUE';GRANT m_role TO c_role;\nHi,I have a quick question, does role custom parameters settings will be granted to users well? Does user c_role will have the same settings m_role.CREATE ROLE m_role ;CREATE ROLE c_role ;ALTER ROLE m_role SET configuration_parameter TO 'VALUE';GRANT m_role TO c_role;",
"msg_date": "Mon, 8 Mar 2021 23:30:14 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Users grants with setting options"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 4:30 PM Nagaraj Raj <[email protected]> wrote:\n\n> I have a quick question, does role custom parameters settings will be\n> granted to users well?\n>\n\nParameters are not inherited - the role credentials that are logging in are\nthe ones that are used to check for defaults. This \"no\" is not explicitly\ndocumented that I can find; though easy enough to test.\n\nDavid J.\n\nOn Mon, Mar 8, 2021 at 4:30 PM Nagaraj Raj <[email protected]> wrote:I have a quick question, does role custom parameters settings will be granted to users well? Parameters are not inherited - the role credentials that are logging in are the ones that are used to check for defaults. This \"no\" is not explicitly documented that I can find; though easy enough to test.David J.",
"msg_date": "Mon, 8 Mar 2021 16:46:10 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Users grants with setting options"
},
{
"msg_contents": "Thank you for confirmation. \n On Monday, March 8, 2021, 03:46:28 PM PST, David G. Johnston <[email protected]> wrote: \n \n On Mon, Mar 8, 2021 at 4:30 PM Nagaraj Raj <[email protected]> wrote:\n\nI have a quick question, does role custom parameters settings will be granted to users well? \n\nParameters are not inherited - the role credentials that are logging in are the ones that are used to check for defaults. This \"no\" is not explicitly documented that I can find; though easy enough to test.\nDavid J. \n\nThank you for confirmation. \n\n\n\n On Monday, March 8, 2021, 03:46:28 PM PST, David G. Johnston <[email protected]> wrote:\n \n\n\nOn Mon, Mar 8, 2021 at 4:30 PM Nagaraj Raj <[email protected]> wrote:I have a quick question, does role custom parameters settings will be granted to users well? Parameters are not inherited - the role credentials that are logging in are the ones that are used to check for defaults. This \"no\" is not explicitly documented that I can find; though easy enough to test.David J.",
"msg_date": "Mon, 8 Mar 2021 23:48:54 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Users grants with setting options"
}
] |
[
{
"msg_contents": "All;\n\n\nWe have a client that is running PostgreSQL 12, they have a table with \n212 columns and 723 partitions\n\n\nIt seems the planning time is consumed by generating 723 sub plans\n\nI suspect it's due to the fact that they are using hash based \npartitioning, example:\n\n\nCREATE TABLE rental_transaction_hash_p723 PARTITION OF \nrental_transaction FOR VALUES WITH (MODULUS 723, REMAINDER 723);\n\n\nBased on a strategy like this, queries will ALWAYS scan all partitions \nunless a hash value is specified as part of the query, correct? I \nsuspect this is the issue... looking for confirmation, or feedback if \ni'm off base\n\n\nThanks in advance\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:53:06 -0600",
"msg_from": "S Bob <[email protected]>",
"msg_from_op": true,
"msg_subject": "wide table, many many partitions, poor query performance"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:53:06AM -0600, S Bob wrote:\n> We have a client that is running PostgreSQL 12, they have a table with 212\n> columns and 723 partitions\n> \n> It seems the planning time is consumed by generating 723 sub plans\n\nIs plannning time the issue ?\nPlease show diagnostic output. You can start from here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n> I suspect it's due to the fact that they are using hash based partitioning,\n> example:\n> \n> CREATE TABLE rental_transaction_hash_p723 PARTITION OF rental_transaction\n> FOR VALUES WITH (MODULUS 723, REMAINDER 723);\n> \n> Based on a strategy like this, queries will ALWAYS scan all partitions\n> unless a hash value is specified as part of the query, correct? I suspect\n> this is the issue... looking for confirmation, or feedback if i'm off base\n\nYou didn't say anything about the query, so: yes, maybe.\nThe partition strategy and key need to be selected to optimize the intended\nqueries. Hash partitioning is frequently a mistake.\n\nSee also:\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 15 Mar 2021 12:03:35 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wide table, many many partitions, poor query performance"
},
{
"msg_contents": "On Mon, 2021-03-15 at 10:53 -0600, S Bob wrote:\n> We have a client that is running PostgreSQL 12, they have a table with \n> 212 columns and 723 partitions\n> \n> It seems the planning time is consumed by generating 723 sub plans\n> \n> I suspect it's due to the fact that they are using hash based \n> partitioning, example:\n> \n> CREATE TABLE rental_transaction_hash_p723 PARTITION OF \n> rental_transaction FOR VALUES WITH (MODULUS 723, REMAINDER 723);\n> \n> Based on a strategy like this, queries will ALWAYS scan all partitions \n> unless a hash value is specified as part of the query, correct? I \n> suspect this is the issue... looking for confirmation, or feedback if \n> i'm off base\n\nThat is correct.\n\nThe only use I can see in hash partitioning is to put the partitions\non different storage devices in order to spread I/O - kind of striping\non the database level.\n\nUnless you can benefit from that, your queries will become slower.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 18:03:59 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wide table, many many partitions, poor query performance"
}
] |
[
{
"msg_contents": "Hi\n\nI am having a rare issue with extremely inefficient merge join. The query\nplan indicates that PG is doing some kind of nested loop, although an index\nis present.\n\nPostgreSQL 9.6.17 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-39), 64-bit\n\nSchema of dir_current (some columns left out for brevity):\n\nstarfish=# \\d sf.dir_current\n Table \"sf.dir_current\"\n Column | Type | Collation | Nullable |\n Default\n--------------------+-------------+-----------+----------+-----------------------------------------------\n id | bigint | | not null |\nnextval('sf.object_id_seq'::regclass)\n volume_id | bigint | | not null |\n parent_id | bigint | | |\n blocks | sf.blkcnt_t | | |\n rec_aggrs | jsonb | | not null |\nIndexes:\n \"dir_current_pk\" PRIMARY KEY, btree (id), tablespace \"sf_current\"\n \"dir_current_parentid_idx\" btree (parent_id), tablespace \"sf_current\"\n \"dir_current_volume_id_out_of_sync_time_idx\" btree (volume_id,\nout_of_sync_time) WHERE out_of_sync_time IS NOT NULL, tablespace\n\"sf_current\"\n \"dir_current_volume_id_path_unq_idx\" UNIQUE, btree (volume_id, path\ntext_pattern_ops), tablespace \"sf_current\"\n \"dir_current_volumeid_id_unq\" UNIQUE CONSTRAINT, btree (volume_id, id),\ntablespace \"sf_current\"\nForeign-key constraints:\n \"dir_current_parentid_fk\" FOREIGN KEY (parent_id) REFERENCES\nsf.dir_current(id) DEFERRABLE INITIALLY DEFERRED\n\ndir_process is created as a temporary table:\n\nCREATE TEMP TABLE dir_process AS (\n SELECT sf.dir_current.id, volume_id, parent_id, depth, size, blocks,\natime, ctime, mtime, sync_time, local_aggrs FROM sf.dir_current\n WHERE ....\n );\n\nCREATE INDEX dir_process_indx ON dir_process(volume_id, id);\nANALYZE dir_process;\n\nand usually contains a few thousands rows.\n\nSlow query:\n SELECT dir.id, dir.volume_id, dir.parent_id, dir.rec_aggrs, dir.blocks\nFROM sf.dir_current AS dir\n INNER JOIN dir_process ON dir.parent_id = dir_process.id 
AND\ndir.volume_id = dir_process.volume_id\n WHERE dir.volume_id = ANY(volume_ids)\n\ndir_current contains around 750M rows altogether, and there is ca. 1.75M\nrows with volume_id=5.\nSometimes Postgres will choose very inefficient plan, which involves\nlooping many times over same rows, producing hundreds of millions or\nbillions of rows:\n\nLOG: duration: 1125530.496 ms plan:\n Merge Join (cost=909.42..40484.01 rows=1 width=456) (actual rows=1\nloops=1)\n Merge Cond: (dir.volume_id = dir_process.volume_id)\n Join Filter: (dir.parent_id = dir_process.id)\n Rows Removed by Join Filter: 13583132483\n -> Index Scan using dir_current_volumeid_id_unq on dir_current\ndir (cost=0.12..884052.46 rows=601329 width=456) (actual rows=2000756\nloops=1)\n Index Cond: (volume_id = ANY ('{5}'::bigint[]))\n -> Sort (cost=909.31..912.70 rows=6789 width=16) (actual\nrows=13581131729 loops=1)\n Sort Key: dir_process.volume_id\n Sort Method: quicksort Memory: 511kB\n -> Seq Scan on dir_process (cost=0.00..822.89 rows=6789\nwidth=16) (actual rows=6789 loops=1)\n\nLOG: duration: 3923310.224 ms plan:\n Merge Join (cost=0.17..4324.64 rows=1 width=456) (actual rows=529\nloops=1)\n Merge Cond: (dir_process.volume_id = dir.volume_id)\n Join Filter: (dir.parent_id = dir_process.id)\n Rows Removed by Join Filter: 831113021\n -> Index Only Scan using dir_process_indx on dir_process\n (cost=0.05..245.00 rows=450 width=16) (actual rows=450 loops=1)\n Heap Fetches: 450\n -> Index Scan using dir_current_volumeid_id_unq on dir_current\ndir (cost=0.12..884052.46 rows=601329 width=456) (actual rows=831113101\nloops=1)\n Index Cond: (volume_id = ANY ('{5}'::bigint[]))\n\nLOG: duration: 10140968.829 ms plan:\n Merge Join (cost=0.17..8389.13 rows=1 width=456) (actual rows=819\nloops=1)\n Merge Cond: (dir_process.volume_id = dir.volume_id)\n Join Filter: (dir.parent_id = dir_process.id)\n Rows Removed by Join Filter: 2153506735\n -> Index Only Scan using dir_process_indx on dir_process\n 
(cost=0.06..659.76 rows=1166 width=16) (actual rows=1166 loops=1)\n Heap Fetches: 1166\n -> Index Scan using dir_current_volumeid_id_unq on dir_current\ndir (cost=0.12..885276.20 rows=602172 width=456) (actual rows=2153506389\nloops=1)\n Index Cond: (volume_id = ANY ('{5}'::bigint[]))\n\nNote that 2153506389 / 1166 = 1 846 918. Similarly 831113101 / 450 =\n1 846 918.\n\nI wonder how I can help Postgres query planner to choose a faster plan?\n\n-- \nMarcin Gozdalik",
"msg_date": "Wed, 17 Mar 2021 16:10:14 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extremely inefficient merge-join"
},
{
"msg_contents": "Marcin Gozdalik <[email protected]> writes:\n> Sometimes Postgres will choose very inefficient plan, which involves\n> looping many times over same rows, producing hundreds of millions or\n> billions of rows:\n\nYeah, this can happen if the outer side of the join has a lot of\nduplicate rows. The query planner is aware of that effect and will\ncharge an increased cost when it applies, so I wonder if your\nstatistics for the tables being joined are up-to-date.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Mar 2021 16:47:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extremely inefficient merge-join"
},
{
"msg_contents": "dir_current changes often, but is analyzed after significant changes, so\neffectively it's analyzed probably once an hour.\nThe approximate ratio of rows with volume_id=5 to the whole number of rows\ndoesn't change (i.e. volume_id=5 will appear roughly in 1.5M-2M rows, total\nis around 750-800M rows).\ndir_process is created once, analyzed and doesn't change later.\n\nAssuming dir_process is the outer side in plans shown here has only\nduplicates - i.e. all rows have volume_id=5 in this example.\nDo you think there is anything that could be changed with the query itself?\nAny hints would be appreciated.\n\nśr., 17 mar 2021 o 20:47 Tom Lane <[email protected]> napisał(a):\n\n> Marcin Gozdalik <[email protected]> writes:\n> > Sometimes Postgres will choose very inefficient plan, which involves\n> > looping many times over same rows, producing hundreds of millions or\n> > billions of rows:\n>\n> Yeah, this can happen if the outer side of the join has a lot of\n> duplicate rows. The query planner is aware of that effect and will\n> charge an increased cost when it applies, so I wonder if your\n> statistics for the tables being joined are up-to-date.\n>\n> regards, tom lane\n>\n\n\n-- \nMarcin Gozdalik\n\n\ndir_current changes often, but is analyzed after significant changes, so effectively it's analyzed probably once an hour.The approximate ratio of rows with volume_id=5 to the whole number of rows doesn't change (i.e. volume_id=5 will appear roughly in 1.5M-2M rows, total is around 750-800M rows).dir_process is created once, analyzed and doesn't change later. Assuming dir_process is the outer side in plans shown here has only duplicates - i.e. all rows have volume_id=5 in this example.Do you think there is anything that could be changed with the query itself? 
Any hints would be appreciated.śr., 17 mar 2021 o 20:47 Tom Lane <[email protected]> napisał(a):Marcin Gozdalik <[email protected]> writes:\n> Sometimes Postgres will choose very inefficient plan, which involves\n> looping many times over same rows, producing hundreds of millions or\n> billions of rows:\n\nYeah, this can happen if the outer side of the join has a lot of\nduplicate rows. The query planner is aware of that effect and will\ncharge an increased cost when it applies, so I wonder if your\nstatistics for the tables being joined are up-to-date.\n\n regards, tom lane\n-- Marcin Gozdalik",
"msg_date": "Wed, 17 Mar 2021 21:27:18 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extremely inefficient merge-join"
}
] |
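A note on the merge-join thread above (not part of the archived messages): Tom Lane's point about duplicate rows on the outer side of a join can be sketched numerically. The helper name below is hypothetical; it just computes the output cardinality an equality join must produce, which grows as the product of the duplicate counts per key — the effect that turns 1.5M duplicate `volume_id=5` rows into billions of intermediate rows.

```python
# Sketch (illustrative, not from the thread): output rows of an equality
# join are the sum over join keys of n_outer(key) * n_inner(key).
from collections import Counter

def merge_join_output_rows(outer_keys, inner_keys):
    """Number of result rows a join on equal keys must emit."""
    outer = Counter(outer_keys)
    inner = Counter(inner_keys)
    return sum(n * inner.get(key, 0) for key, n in outer.items())

# 1,000 outer rows, all with the same key, joined against 2,000 matching
# inner rows: the join must emit 2,000,000 rows -- quadratic blow-up.
print(merge_join_output_rows([5] * 1000, [5] * 2000))  # 2000000
```

This is why up-to-date statistics matter: the planner charges extra cost for a merge join only when it knows the outer side is duplicate-heavy.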
[
{
"msg_contents": "AWS RDS v12\n\nThe following SQL takes ~25 seconds to run. I'm relatively new to postgres\nbut the execution plan (https://explain.depesz.com/s/N4oR) looks like it's\nmaterializing the entire EXISTS subquery for each row returned by the rest\nof the query before probing for plate_384_id existence. postgres is\nchoosing sequential scans on sample_plate_384 and test_result when\nsuitable, efficient indexes exist. a re-written query produces a much\nbetter plan (https://explain.depesz.com/s/zXJ6). Executing the EXISTS\nportion of the query with an explicit PLATE_384_ID yields the execution\nplan we want as well (https://explain.depesz.com/s/3QAK). unnesting the\nEXISTS and adding a DISTINCT on the result also yields a better plan.\n\nI've tried tried the following:\n\ndisable parallel\nset join_collapse_limit=1 and played with order of EXISTS/NOT EXISTS\nchanged work_mem and enable_material to see if that had any effect\nVACUUM FULL'd TEST_RESULT and SAMPLE_PLATE_384\ncreated a stats object on (sample_id, sample_plate_384_id) for both\nTEST_RESULT and SAMPLE_PLATE_384 to see if that would help (they increment\nfairly consistently with each other)\n\nI'm out of ideas on how to convince postgres to choose a better plan. any\nand all help/suggestions/explanations would be greatly appreciated. 
the\nrewritten SQL performs sufficiently well but i'd like to understand why\npostgres is doing this and what to do about it so i can tackle the next\nSQL performance issue with a little more knowledge.\n\nSELECT count(*) AS \"count\" FROM \"plate_384_scan\"\nWHERE NOT EXISTS (SELECT 1 FROM \"plate_384_scan\" AS \"plate_384_scan_0\"\nWHERE \"plate_384_scan_0\".\"ts\" > \"plate_384_scan\".\"ts\" AND\n\"plate_384_scan_0\".\"plate_384_id\" = \"plate_384_scan\".\"plate_384_id\")\n AND EXISTS (SELECT 1 FROM \"sample_plate_384\" INNER JOIN \"test_result\"\nUSING (\"sample_plate_384_id\", \"sample_id\") WHERE \"test_result\" IS NULL AND\n\"plate_384_scan_id\" = \"plate_384_scan\".\"plate_384_scan_id\")\n AND NOT EXISTS (SELECT 1 FROM \"plate_384_abandoned\" WHERE \"plate_384_id\"\n= \"plate_384_scan\".\"plate_384_id\");\n\n[limsdb_dev] # SELECT relname, relpages, reltuples, relallvisible, relkind,\nrelnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\nWHERE relname in ('sample_plate_384','test_result',\n'plate_384_scan','plate_384_abandoned') order by 1;\n relname | relpages | reltuples | relallvisible | relkind |\nrelnatts | relhassubclass | reloptions | pg_table_size\n---------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n plate_384_abandoned | 1 | 16 | 0 | r |\n 4 | f | (null) | 16384\n plate_384_scan | 13 | 1875 | 0 | r |\n 5 | f | (null) | 131072\n sample_plate_384 | 3827 | 600701 | 0 | r |\n 9 | f | (null) | 31350784\n test_result | 4900 | 599388 | 0 | r |\n 8 | f | (null) | 40140800\n(4 rows)\n\nTime: 44.405 ms\n[limsdb_dev] # \\d plate_384_abandoned\n Table \"lab_data.plate_384_abandoned\"\n Column | Type | Collation | Nullable |\n Default\n--------------+--------------------------+-----------+----------+-------------------\n plate_384_id | integer | | not null |\n reason | text | | not null |\n tech_id | integer | | |\n ts | timestamp with time zone | | not null 
|\nCURRENT_TIMESTAMP\nIndexes:\n \"plate_384_abandoned_pkey\" PRIMARY KEY, btree (plate_384_id)\nForeign-key constraints:\n \"plate_384_abandoned_plate_384_id_fkey\" FOREIGN KEY (plate_384_id)\nREFERENCES plate_384(plate_384_id)\n \"plate_384_abandoned_tech_id_fkey\" FOREIGN KEY (tech_id) REFERENCES\ntech(tech_id)\n\n[limsdb_dev] # \\d plate_384_scan\n Table\n\"lab_data.plate_384_scan\"\n Column | Type | Collation | Nullable |\n Default\n-------------------+--------------------------+-----------+----------+-----------------------------------------------------------\n plate_384_scan_id | integer | | not null |\nnextval('plate_384_scan_plate_384_scan_id_seq'::regclass)\n plate_384_id | integer | | not null |\n equipment_id | integer | | not null |\n tech_id | integer | | not null |\n ts | timestamp with time zone | | not null |\nCURRENT_TIMESTAMP\nIndexes:\n \"pk_plate_384_scan\" PRIMARY KEY, btree (plate_384_scan_id)\n \"plate_384_scan_idx001\" btree (ts, plate_384_scan_id)\n \"plate_384_scan_idx002\" btree (plate_384_id, ts)\nForeign-key constraints:\n \"fk_plate_384_scan_equipment_id\" FOREIGN KEY (equipment_id) REFERENCES\nequipment(equipment_id)\n \"fk_plate_384_scan_plate_384_id\" FOREIGN KEY (plate_384_id) REFERENCES\nplate_384(plate_384_id)\n \"fk_plate_384_scan_tech_id\" FOREIGN KEY (tech_id) REFERENCES\ntech(tech_id)\nReferenced by:\n TABLE \"sample_plate_384\" CONSTRAINT\n\"fk_sample_plate_384_plate_384_scan_id\" FOREIGN KEY (plate_384_scan_id)\nREFERENCES plate_384_scan(plate_384_scan_id)\n TABLE \"sample_plate_384_removed\" CONSTRAINT\n\"sample_plate_384_removed_plate_384_scan_id_fkey\" FOREIGN KEY\n(plate_384_scan_id) REFERENCES plate_384_scan(plate_384_scan_id)\n TABLE \"test_result_file\" CONSTRAINT\n\"test_result_file_plate_384_scan_id_fkey\" FOREIGN KEY (plate_384_scan_id)\nREFERENCES plate_384_scan(plate_384_scan_id)\n\n[limsdb_dev] # \\d sample_plate_384\n Table \"lab_data.sample_plate_384\"\n Column | Type | Collation | Nullable |\n 
Default\n---------------------+---------+-----------+----------+---------------------------------------------------------------\n sample_plate_384_id | integer | | not null |\nnextval('sample_plate_384_sample_plate_384_id_seq'::regclass)\n sample_id | integer | | not null |\n plate_384_scan_id | integer | | not null |\n plate_384_well | integer | | not null |\nIndexes:\n \"pk_sample_plate_384\" PRIMARY KEY, btree (sample_plate_384_id)\n \"sample_plate_384_idx001\" btree (sample_id, sample_plate_384_id)\n \"sample_plate_384_idx002\" btree (sample_id, sample_plate_384_id,\nplate_384_scan_id)\n \"sample_plate_384_idx003\" btree (plate_384_scan_id, sample_plate_384_id)\n \"sample_plate_384_idx004\" btree (plate_384_scan_id,\nsample_plate_384_id, sample_id)\n \"sample_plate_384_plate_384_scan_id_plate_384_well_idx\" UNIQUE, btree\n(plate_384_scan_id, plate_384_well)\nForeign-key constraints:\n \"fk_sample_plate_384_plate_384_scan_id\" FOREIGN KEY (plate_384_scan_id)\nREFERENCES plate_384_scan(plate_384_scan_id)\n \"fk_sample_plate_384_sample\" FOREIGN KEY (sample_id) REFERENCES\nsample(sample_id)\nReferenced by:\n TABLE \"sample_plate_96_plate_384\" CONSTRAINT\n\"fk_sample_plate_96_plate_384_sample_plate_384_id\" FOREIGN KEY\n(sample_plate_384_id) REFERENCES sample_plate_384(sample_plate_384_id)\n TABLE \"test_result\" CONSTRAINT \"fk_test_result_sample_plate_384\"\nFOREIGN KEY (sample_plate_384_id) REFERENCES\nsample_plate_384(sample_plate_384_id)\nStatistics objects:\n \"lab_data\".\"sp384_stats\" (ndistinct, dependencies, mcv) ON\nsample_plate_384_id, sample_id FROM sample_plate_384\n\n[limsdb_dev] # \\d test_result\n Table \"lab_data.test_result\"\n Column | Type | Collation | Nullable |\n Default\n---------------------+--------------------------+-----------+----------+-----------------------------------------------------\n test_result_id | integer | | not null |\nnextval('test_result_test_result_id_seq'::regclass)\n sample_plate_384_id | integer | | not null |\n 
 sample_id | integer | | not null |\n equipment_id | integer | | |\n test_result | character varying(100) | | |\n final_result_flag | boolean | | |\n tech_id | integer | | |\n ts | timestamp with time zone | | |\nCURRENT_TIMESTAMP\nIndexes:\n \"pk_test_result\" PRIMARY KEY, btree (test_result_id)\n \"test_result_idx001\" btree (sample_id, ts, final_result_flag,\ntest_result)\n \"test_result_idx002\" btree (ts, final_result_flag, test_result,\nsample_id)\n \"test_result_idx003\" btree (ts, sample_id)\n \"test_result_idx004\" btree (sample_id, sample_plate_384_id)\n \"test_result_idx005\" btree (sample_plate_384_id)\n \"test_result_idx006\" btree (sample_plate_384_id, sample_id, test_result)\n \"test_result_sample_plate_384_id_idx\" UNIQUE, btree\n(sample_plate_384_id)\nForeign-key constraints:\n \"fk_test_result_equipment\" FOREIGN KEY (equipment_id) REFERENCES\nequipment(equipment_id)\n \"fk_test_result_sample\" FOREIGN KEY (sample_id) REFERENCES\nsample(sample_id)\n \"fk_test_result_sample_plate_384\" FOREIGN KEY (sample_plate_384_id)\nREFERENCES sample_plate_384(sample_plate_384_id)\n \"fk_test_result_tech\" FOREIGN KEY (tech_id) REFERENCES tech(tech_id)\nReferenced by:\n TABLE \"test_result_detail\" CONSTRAINT \"fk_test_result_detail\" FOREIGN\nKEY (test_result_id) REFERENCES test_result(test_result_id)\n TABLE \"reported_test_result\" CONSTRAINT\n\"reported_test_result_test_result_id_fkey\" FOREIGN KEY (test_result_id)\nREFERENCES test_result(test_result_id)\nStatistics objects:\n \"lab_data\".\"test_result_stats\" (ndistinct, dependencies, mcv) ON\nsample_plate_384_id, sample_id FROM test_result",
"msg_date": "Mon, 22 Mar 2021 08:10:46 -0500",
"msg_from": "Chris Stephens <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL performance issue (postgresql chooses a bad plan when a better\n one is available)"
},
{
"msg_contents": "On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n> The following SQL takes ~25 seconds to run. I'm relatively new to postgres\n> but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's\n> materializing the entire EXISTS subquery for each row returned by the rest\n> of the query before probing for plate_384_id existence. postgres is\n> choosing sequential scans on sample_plate_384 and test_result when suitable,\n> efficient indexes exist. a re-written query produces a much better plan\n> (https://explain.depesz.com/s/zXJ6). Executing the EXISTS portion of the\n> query with an explicit PLATE_384_ID yields the execution plan we want as\n> well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and adding\n> a DISTINCT on the result also yields a better plan.\n\nGreat! Then use one of the rewritten queries.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 22 Mar 2021 15:54:13 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL performance issue (postgresql chooses a bad plan when a\n better one is available)"
},
{
"msg_contents": "we are but i was hoping to get a better understanding of where the\noptimizer is going wrong and what i can do about it.\n\nchris\n\n\nOn Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n> > The following SQL takes ~25 seconds to run. I'm relatively new to\n> postgres\n> > but the execution plan (https://explain.depesz.com/s/N4oR) looks like\n> it's\n> > materializing the entire EXISTS subquery for each row returned by the\n> rest\n> > of the query before probing for plate_384_id existence. postgres is\n> > choosing sequential scans on sample_plate_384 and test_result when\n> suitable,\n> > efficient indexes exist. a re-written query produces a much better plan\n> > (https://explain.depesz.com/s/zXJ6). Executing the EXISTS portion of\n> the\n> > query with an explicit PLATE_384_ID yields the execution plan we want as\n> > well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and\n> adding\n> > a DISTINCT on the result also yields a better plan.\n>\n> Great! Then use one of the rewritten queries.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nwe are but i was hoping to get a better understanding of where the optimizer is going wrong and what i can do about it. chrisOn Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe <[email protected]> wrote:On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n> The following SQL takes ~25 seconds to run. I'm relatively new to postgres\n> but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's\n> materializing the entire EXISTS subquery for each row returned by the rest\n> of the query before probing for plate_384_id existence. postgres is\n> choosing sequential scans on sample_plate_384 and test_result when suitable,\n> efficient indexes exist. a re-written query produces a much better plan\n> (https://explain.depesz.com/s/zXJ6). 
Executing the EXISTS portion of the\n> query with an explicit PLATE_384_ID yields the execution plan we want as\n> well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and adding\n> a DISTINCT on the result also yields a better plan.\n\nGreat! Then use one of the rewritten queries.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com",
"msg_date": "Mon, 22 Mar 2021 12:28:49 -0500",
"msg_from": "Chris Stephens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL performance issue (postgresql chooses a bad plan when a\n better one is available)"
},
{
"msg_contents": "you can play around various `enable_*` flags to see if disabling any\nof these will *maybe* yield the plan you were expecting, and then\ncheck the costs in EXPLAIN to see if the optimiser also thinks this\nplan is cheaper.\n\n\nOn Mon, Mar 22, 2021 at 6:29 PM Chris Stephens <[email protected]> wrote:\n>\n> we are but i was hoping to get a better understanding of where the optimizer is going wrong and what i can do about it.\n>\n> chris\n>\n>\n> On Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe <[email protected]> wrote:\n>>\n>> On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n>> > The following SQL takes ~25 seconds to run. I'm relatively new to postgres\n>> > but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's\n>> > materializing the entire EXISTS subquery for each row returned by the rest\n>> > of the query before probing for plate_384_id existence. postgres is\n>> > choosing sequential scans on sample_plate_384 and test_result when suitable,\n>> > efficient indexes exist. a re-written query produces a much better plan\n>> > (https://explain.depesz.com/s/zXJ6). Executing the EXISTS portion of the\n>> > query with an explicit PLATE_384_ID yields the execution plan we want as\n>> > well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and adding\n>> > a DISTINCT on the result also yields a better plan.\n>>\n>> Great! Then use one of the rewritten queries.\n>>\n>> Yours,\n>> Laurenz Albe\n>> --\n>> Cybertec | https://www.cybertec-postgresql.com\n>>\n\n\n",
"msg_date": "Mon, 22 Mar 2021 22:39:02 +0100",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL performance issue (postgresql chooses a bad plan when a\n better one is available)"
},
{
"msg_contents": "\"set enable_material=false;\" produces an efficient plan. good to know there\nare *some* knobs to turn when the optimizer comes up with a bad plan. would\nbe awesome if you could lock that plan into place w/out altering the\nvariable.\n\nthanks for the help Hannu!\n\nOn Mon, Mar 22, 2021 at 4:39 PM Hannu Krosing <[email protected]> wrote:\n\n> you can play around various `enable_*` flags to see if disabling any\n> of these will *maybe* yield the plan you were expecting, and then\n> check the costs in EXPLAIN to see if the optimiser also thinks this\n> plan is cheaper.\n>\n>\n> On Mon, Mar 22, 2021 at 6:29 PM Chris Stephens <[email protected]>\n> wrote:\n> >\n> > we are but i was hoping to get a better understanding of where the\n> optimizer is going wrong and what i can do about it.\n> >\n> > chris\n> >\n> >\n> > On Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe <[email protected]>\n> wrote:\n> >>\n> >> On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n> >> > The following SQL takes ~25 seconds to run. I'm relatively new to\n> postgres\n> >> > but the execution plan (https://explain.depesz.com/s/N4oR) looks\n> like it's\n> >> > materializing the entire EXISTS subquery for each row returned by\n> the rest\n> >> > of the query before probing for plate_384_id existence. postgres is\n> >> > choosing sequential scans on sample_plate_384 and test_result when\n> suitable,\n> >> > efficient indexes exist. a re-written query produces a much better\n> plan\n> >> > (https://explain.depesz.com/s/zXJ6). Executing the EXISTS portion\n> of the\n> >> > query with an explicit PLATE_384_ID yields the execution plan we\n> want as\n> >> > well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and\n> adding\n> >> > a DISTINCT on the result also yields a better plan.\n> >>\n> >> Great! 
Then use one of the rewritten queries.\n> >>\n> >> Yours,\n> >> Laurenz Albe\n> >> --\n> >> Cybertec | https://www.cybertec-postgresql.com\n> >>\n>",
"msg_date": "Tue, 23 Mar 2021 10:21:55 -0500",
"msg_from": "Chris Stephens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL performance issue (postgresql chooses a bad plan when a\n better one is available)"
},
{
"msg_contents": "When I do serious database development I try to use database functions\nas much as possible.\n\nYou can attach any flag value to a function in which case it gets set\nwhen the function is running,\n\nIn your case you could probably wrap your query into an set-returning\n`LANGUAGE SQL` function [1] and then include\n\n`SET enable_material=false`\n\nas part of the `CREATE FUNCTION` [2]\n\n------\n[1] https://www.postgresql.org/docs/current/xfunc-sql.html\n[2] https://www.postgresql.org/docs/13/sql-createfunction.html\n\nOn Tue, Mar 23, 2021 at 4:22 PM Chris Stephens <[email protected]> wrote:\n>\n> \"set enable_material=false;\" produces an efficient plan. good to know there are *some* knobs to turn when the optimizer comes up with a bad plan. would be awesome if you could lock that plan into place w/out altering the variable.\n>\n> thanks for the help Hannu!\n>\n> On Mon, Mar 22, 2021 at 4:39 PM Hannu Krosing <[email protected]> wrote:\n>>\n>> you can play around various `enable_*` flags to see if disabling any\n>> of these will *maybe* yield the plan you were expecting, and then\n>> check the costs in EXPLAIN to see if the optimiser also thinks this\n>> plan is cheaper.\n>>\n>>\n>> On Mon, Mar 22, 2021 at 6:29 PM Chris Stephens <[email protected]> wrote:\n>> >\n>> > we are but i was hoping to get a better understanding of where the optimizer is going wrong and what i can do about it.\n>> >\n>> > chris\n>> >\n>> >\n>> > On Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe <[email protected]> wrote:\n>> >>\n>> >> On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:\n>> >> > The following SQL takes ~25 seconds to run. I'm relatively new to postgres\n>> >> > but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's\n>> >> > materializing the entire EXISTS subquery for each row returned by the rest\n>> >> > of the query before probing for plate_384_id existence. 
postgres is\n>> >> > choosing sequential scans on sample_plate_384 and test_result when suitable,\n>> >> > efficient indexes exist. a re-written query produces a much better plan\n>> >> > (https://explain.depesz.com/s/zXJ6). Executing the EXISTS portion of the\n>> >> > query with an explicit PLATE_384_ID yields the execution plan we want as\n>> >> > well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and adding\n>> >> > a DISTINCT on the result also yields a better plan.\n>> >>\n>> >> Great! Then use one of the rewritten queries.\n>> >>\n>> >> Yours,\n>> >> Laurenz Albe\n>> >> --\n>> >> Cybertec | https://www.cybertec-postgresql.com\n>> >>\n\n\n",
"msg_date": "Wed, 24 Mar 2021 00:20:49 +0100",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL performance issue (postgresql chooses a bad plan when a\n better one is available)"
}
] |
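A note on the EXISTS-plan thread above (not part of the archived messages): Hannu's suggestion of wrapping the query in a SQL function with a per-function `SET` clause can be sketched as DDL generation. The function and names below are illustrative assumptions, not taken from the thread; the pattern relies on `CREATE FUNCTION ... SET configuration_parameter = value`, which scopes the planner override to the function call.

```python
# Sketch (illustrative names) of the wrapper pattern discussed above:
# a SQL function whose SET clause pins a planner GUC only while it runs.
def wrap_query_with_setting(name, query, setting, value):
    """Emit DDL for a SQL function that runs `query` with `setting`
    overridden for the duration of the call only."""
    return (
        f"CREATE FUNCTION {name}() RETURNS bigint\n"
        f"LANGUAGE sql STABLE\n"
        f"SET {setting} = {value}\n"
        f"AS $$ {query} $$;"
    )

ddl = wrap_query_with_setting(
    "count_unscanned_plates",
    "SELECT count(*) FROM plate_384_scan",
    "enable_material",
    "off",
)
print(ddl)
```

Calling the resulting function behaves like the `set enable_material=false;` session tweak from the thread, but without leaking the setting into the rest of the session.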
[
{
"msg_contents": "Hi all,\n\nI have a query where Postgresql (11.9 at the moment) is making an odd plan\nchoice, choosing to use index scans which require filtering out millions of\nrows, rather than \"just\" doing an aggregate over the rows the where clause\ntargets which is much faster.\nAFAICT it isn't a statistics problem, at least increasing the stats target\nand analyzing the table doesn't seem to fix the problem.\n\nThe query looks like:\n\n======\n explain analyze select min(risk_id),max(risk_id) from risk where\ntime>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=217.80..217.81 rows=1 width=16) (actual\ntime=99722.685..99722.687 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.57..108.90 rows=1 width=8) (actual\ntime=38454.537..38454.538 rows=1 loops=1)\n -> Index Scan using risk_risk_id_key on risk\n (cost=0.57..9280362.29 rows=85668 width=8) (actual\ntime=38454.535..38454.536 rows=1 loops=1)\n Index Cond: (risk_id IS NOT NULL)\n Filter: ((\"time\" >= '2020-01-20 15:00:07+00'::timestamp\nwith time zone) AND (\"time\" < '2020-01-21 15:00:08+00'::timestamp with time\nzone))\n Rows Removed by Filter: 161048697\n InitPlan 2 (returns $1)\n -> Limit (cost=0.57..108.90 rows=1 width=8) (actual\ntime=61268.140..61268.140 rows=1 loops=1)\n -> Index Scan Backward using risk_risk_id_key on risk risk_1\n (cost=0.57..9280362.29 rows=85668 width=8) (actual\ntime=61268.138..61268.139 rows=1 loops=1)\n Index Cond: (risk_id IS NOT NULL)\n Filter: ((\"time\" >= '2020-01-20 15:00:07+00'::timestamp\nwith time zone) AND (\"time\" < '2020-01-21 15:00:08+00'::timestamp with time\nzone))\n Rows Removed by Filter: 41746396\n Planning Time: 0.173 ms\n Execution Time: 99722.716 ms\n(15 rows)\n======\n\nIf I add a count(*) so it has to consider all rows in the 
range for that\npart of the query and doesn't consider using the other index for a min/max\n\"shortcut\" then the query is fast.\n======\nexplain analyze select min(risk_id),max(risk_id), count(*) from risk where\ntime>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=4376.67..4376.68 rows=1 width=24) (actual\ntime=30.011..30.012 rows=1 loops=1)\n -> Index Scan using risk_time_idx on risk (cost=0.57..3734.17\nrows=85667 width=8) (actual time=0.018..22.441 rows=90973 loops=1)\n Index Cond: ((\"time\" >= '2020-01-20 15:00:07+00'::timestamp with\ntime zone) AND (\"time\" < '2020-01-21 15:00:08+00'::timestamp with time\nzone))\n Planning Time: 0.091 ms\n Execution Time: 30.045 ms\n(5 rows)\n======\n\nMy count() hack works around my immediate problem but I'm trying to get my\nhead round why Postgres chooses the plan it does without it, in case there\nis some general problem with my configuration that may negatively affect\nother areas, or there's something else I am missing.\n\nAny ideas?\n\nPaul McGarry",
"msg_date": "Tue, 23 Mar 2021 15:00:38 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd (slow) plan choice with min/max"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:\n> I have a query where Postgresql (11.9 at the moment) is making an odd plan\n> choice, choosing to use index scans which require filtering out millions of\n> rows, rather than \"just\" doing an aggregate over the rows the where clause\n> targets which is much faster.\n> AFAICT it isn't a statistics problem, at least increasing the stats target\n> and analyzing the table doesn't seem to fix the problem.\n\n> explain analyze select min(risk_id),max(risk_id) from risk where\n> time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';\n\nI'm guessing the time and ID columns are highly correlated...\n\nSo the planner thinks it can get the smallest ID by scanning the ID index, but\nthen ends up rejecting the first 161e6 rows for which the time is too low, and\nfails the >= condition.\n\nAnd thinks it can get the greatest ID by backward scanning the ID idx, but ends\nup rejecting/filtering the first 41e6 rows, for which the time is too high,\nfailing the < condition.\n\nThis is easy to reproduce:\npostgres=# DROP TABLE t; CREATE TABLE t AS SELECT a i,a j FROM generate_series(1,999999)a; CREATE INDEX ON t(j); ANALYZE t;\npostgres=# explain analyze SELECT min(j), max(j) FROM t WHERE i BETWEEN 9999 AND 99999;\n\nOne solution seems to be to create an index on (i,j), but I don't know if\nthere's a better way.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 23 Mar 2021 00:13:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd (slow) plan choice with min/max"
},
{
"msg_contents": "On Tue, 23 Mar 2021 at 16:13, Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:\n> > I have a query where Postgresql (11.9 at the moment) is making an odd\n> plan\n> > choice, choosing to use index scans which require filtering out millions\n> of\n> > rows, rather than \"just\" doing an aggregate over the rows the where\n> clause\n> > targets which is much faster.\n> > AFAICT it isn't a statistics problem, at least increasing the stats\n> target\n> > and analyzing the table doesn't seem to fix the problem.\n>\n> > explain analyze select min(risk_id),max(risk_id) from risk where\n> > time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';\n>\n> I'm guessing the time and ID columns are highly correlated...\n>\n> So the planner thinks it can get the smallest ID by scanning the ID index,\n> but\n> then ends up rejecting the first 161e6 rows for which the time is too low,\n> and\n> fails the >= condition.\n>\n> And thinks it can get the greatest ID by backward scanning the ID idx, but\n> ends\n> up rejecting/filtering the first 41e6 rows, for which the time is too high,\n> failing the < condition.\n>\n\nYes, the columns are highly correlated, but that alone doesn't seem like it\nshould be sufficient criteria to choose this plan.\nIe the selection criteria (1 day of data about a year ago) has a year+\nworth of data after it and probably a decade of data before it, so anything\nwalking a correlated index from top or bottom is going to have to walk past\na lot of data before it gets to data that fits the criteria.\n\n\n> One solution seems to be to create an index on (i,j), but I don't know if\n> there's a better way.\n>\n>\nAdding the count() stops the planner considering the option so that will\nwork for now.\nMy colleague has pointed out that we had the same issue in November and I\ncame up with the count() workaround then too, but somehow seem to have\nforgotten it in the meantime 
and reinvented it today. I wonder if I posted\nto pgsql-performance then too.....\n\nMaybe time for me to read the PG12 release notes....\n\nPaul",
"msg_date": "Tue, 23 Mar 2021 17:51:49 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd (slow) plan choice with min/max"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 2:52 AM Paul McGarry <[email protected]> wrote:\n\n>\n>\n> On Tue, 23 Mar 2021 at 16:13, Justin Pryzby <[email protected]> wrote:\n>\n>> On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:\n>> > I have a query where Postgresql (11.9 at the moment) is making an odd\n>> plan\n>> > choice, choosing to use index scans which require filtering out\n>> millions of\n>> > rows, rather than \"just\" doing an aggregate over the rows the where\n>> clause\n>> > targets which is much faster.\n>> > AFAICT it isn't a statistics problem, at least increasing the stats\n>> target\n>> > and analyzing the table doesn't seem to fix the problem.\n>>\n>> > explain analyze select min(risk_id),max(risk_id) from risk where\n>> > time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00';\n>>\n>> I'm guessing the time and ID columns are highly correlated...\n>>\n>> So the planner thinks it can get the smallest ID by scanning the ID\n>> index, but\n>> then ends up rejecting the first 161e6 rows for which the time is too\n>> low, and\n>> fails the >= condition.\n>>\n>> And thinks it can get the greatest ID by backward scanning the ID idx,\n>> but ends\n>> up rejecting/filtering the first 41e6 rows, for which the time is too\n>> high,\n>> failing the < condition.\n>>\n>\n> Yes, the columns are highly correlated, but that alone doesn't seem like\n> it should be sufficient criteria to choose this plan.\n> Ie the selection criteria (1 day of data about a year ago) has a year+\n> worth of data after it and probably a decade of data before it, so anything\n> walking a correlated index from top or bottom is going to have to walk past\n> a lot of data before it gets to data that fits the criteria.\n>\n\n\nI assume you have a statistic on the correlated columns, ie `create\nstatistic` ?\n\nIf you can't use partitions on your date column, can you use partial\nindexes instead? 
Or a functional index with min() over day and max() over\nday?",
"msg_date": "Tue, 23 Mar 2021 09:07:11 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd (slow) plan choice with min/max"
},
{
"msg_contents": "On Wed, 24 Mar 2021 at 00:07, Rick Otten <[email protected]> wrote:\n\n>\n>> Yes, the columns are highly correlated, but that alone doesn't seem like\n>> it should be sufficient criteria to choose this plan.\n>> Ie the selection criteria (1 day of data about a year ago) has a year+\n>> worth of data after it and probably a decade of data before it, so anything\n>> walking a correlated index from top or bottom is going to have to walk past\n>> a lot of data before it gets to data that fits the criteria.\n>>\n>\n>\n> I assume you have a statistic on the correlated columns, ie `create\n> statistic` ?\n>\n\nI didn't, but adding\n======\nCREATE STATISTICS risk_risk_id_time_correlation_stats ON risk_id,time FROM\nrisk;\nanalyze risk;\n======\ndoesn't seem to help.\nI get the same plan before/after. Second run was faster, but just because\ndata was hot.\n\n\n\n> If you can't use partitions on your date column, can you use partial\n> indexes instead? Or a functional index with min() over day and max() over\n> day?\n>\n\nI don't particularly want to add more weird indexes to solve this one\nparticular query. as the existing risk_id index should make it efficient\nenough if only the planner chose to use it. 
This is part of an archiving\njob, identifying sections of historical data, so not a query that needs to\nbe super optimised, but essentially doing a full table scan\nbackwards/forwards as it is now is doing a lot of unnecessary IO that would\nbe best left free for more time sensitive queries. My count() workaround\nworks so we can use that.\nI'm more interested in understanding why the planner makes what seems to be\nan obviously bad choice.\n\nPaul",
"msg_date": "Wed, 24 Mar 2021 08:38:26 +1100",
"msg_from": "Paul McGarry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd (slow) plan choice with min/max"
},
{
"msg_contents": "Another workaround could be :\n\nexplain analyze select min(risk_id),max(risk_id) from (select * from risk\nwhere time>='2020-01-20 15:00:07+00' and time < '2020-01-21 15:00:08+00')\nt;\n\nin order to force the planner to use first the timestamp index.\n\nHowever, I agree with you; we meet a planner bad behavior here.\n\nRegards,\nYoan SULTAN\n\nLe mar. 23 mars 2021 à 22:38, Paul McGarry <[email protected]> a écrit :\n\n>\n>\n> On Wed, 24 Mar 2021 at 00:07, Rick Otten <[email protected]> wrote:\n>\n>>\n>>> Yes, the columns are highly correlated, but that alone doesn't seem like\n>>> it should be sufficient criteria to choose this plan.\n>>> Ie the selection criteria (1 day of data about a year ago) has a year+\n>>> worth of data after it and probably a decade of data before it, so anything\n>>> walking a correlated index from top or bottom is going to have to walk past\n>>> a lot of data before it gets to data that fits the criteria.\n>>>\n>>\n>>\n>> I assume you have a statistic on the correlated columns, ie `create\n>> statistic` ?\n>>\n>\n> I didn't, but adding\n> ======\n> CREATE STATISTICS risk_risk_id_time_correlation_stats ON risk_id,time FROM\n> risk;\n> analyze risk;\n> ======\n> doesn't seem to help.\n> I get the same plan before/after. Second run was faster, but just because\n> data was hot.\n>\n>\n>\n>> If you can't use partitions on your date column, can you use partial\n>> indexes instead? Or a functional index with min() over day and max() over\n>> day?\n>>\n>\n> I don't particularly want to add more weird indexes to solve this one\n> particular query. as the existing risk_id index should make it efficient\n> enough if only the planner chose to use it. 
This is part of an archiving\n> job, identifying sections of historical data, so not a query that needs to\n> be super optimised, but essentially doing a full table scan\n> backwards/forwards as it is now is doing a lot of unnecessary IO that would\n> be best left free for more time sensitive queries. My count() workaround\n> works so we can use that.\n> I'm more interested in understanding why the planner makes what seems to\n> be an obviously bad choice.\n>\n> Paul\n>\n\n\n-- \nRegards,\nYo.",
"msg_date": "Wed, 24 Mar 2021 10:26:12 +0100",
"msg_from": "Yoan SULTAN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd (slow) plan choice with min/max"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are trying to find maximum throughput in terms of transactions per\nsecond (or simultaneous read+write SQL operations per second) for a use\ncase that does one ACID transaction (consisting of tens of reads and tens\nof updates/ inserts) per incoming stream element on a high-volume\nhigh-velocity stream of data.\n\nOur load test showed us that PostgreSQL version 11/12 could support up to\n10,000 to 11,000 such ACID transactions per second = 55K read SQL\noperations per second along with simultaneous 77 K write SQL operations per\nsecond (= total 132 K total read+write SQL operations per second)\n\nThe underlying hardware limit is much more. But is this the maximum\nPostgreSQL can support? If not, what are the server tuning parameters we\nshould consider for this scale of throughput?\n\nThanks,\nArti",
"msg_date": "Thu, 25 Mar 2021 23:42:15 +0530",
"msg_from": "Geervan Hayatnagarkar <[email protected]>",
"msg_from_op": true,
"msg_subject": "High-volume writes - what is the max throughput possible"
},
{
"msg_contents": "It completely depends on a lot of factors of course, so these numbers are\nmeaningless.\nIt depends at the very least on:\n* The hardware (CPU, disk type + disk connection)\n* The size of the records read/written\n* The presence of indices and constraints.\n\nSo, adding some other meaningless numbers to at least give some idea: we\nhave specialized load processes using Postgres where we reach insert counts\nof around one million records per second. This is the *compound* insert\ncount of multiple parallel streams that read data from one table and insert\nit in one or more other tables. So you can definitely go faster, but it\ndepends in great amount on how you process the data and what you run on.\nIf you run on clouds (at least on Azure, which we use) you can have other\nnasty surprises as they do not really seem to have disks but instead a set\nof old people writing the data onto paper... On normal (non-ephemeral)\ndisks you will not get close to these numbers.\n\nThings to do are:\n* use the copy command to do the actual insert. We wrote a special kind of\n\"insert\" that provides the input stream for the copy command dynamically as\ndata becomes available.\n* Do the reading of data in a different thread than the writing, and have a\nlarge records buffer between the two processes. 
In that way reading the\ndata can continue while the writing process writes.\n\nRegards,\n\nFrits\n\n\nOn Fri, Mar 26, 2021 at 1:20 PM Geervan Hayatnagarkar <[email protected]>\nwrote:\n\n> Hi,\n>\n> We are trying to find maximum throughput in terms of transactions per\n> second (or simultaneous read+write SQL operations per second) for a use\n> case that does one ACID transaction (consisting of tens of reads and tens\n> of updates/ inserts) per incoming stream element on a high-volume\n> high-velocity stream of data.\n>\n> Our load test showed us that PostgreSQL version 11/12 could support upto\n> 10,000 to 11,000 such ACID transactions per second = 55K read SQL\n> operations per second along with simultaneous 77 K write SQL operations per\n> second (= total 132 K total read+write SQL operations per second)\n>\n> The underlying hardware limit is much more. But is this the maximum\n> PostgreSQL can support? If not what are the server tuning parameters we\n> should consider for this scale of throughput ?\n>\n> Thanks,\n> Arti\n>\n>",
"msg_date": "Fri, 26 Mar 2021 13:48:36 +0100",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High-volume writes - what is the max throughput possible"
},
{
"msg_contents": "Are you issuing \"tens of reads and tens of updates/ inserts\" for your\nACID transaction individually from SQL client, or have you packaged\nthem as a single database function ?\n\nUsing the function can be much faster, as it eliminates all the\ncommand latencies between the client and the server.\n\nCheers\nHannu\n\nOn Fri, Mar 26, 2021 at 1:48 PM Frits Jalvingh <[email protected]> wrote:\n>\n> It completely depends on a lot of factors of course, so these numbers are meaningless.\n> It depends at the very least on:\n> * The hardware (CPU, disk type + disk connection)\n> * The size of the records read/written\n> * The presence of indices and constraints.\n>\n> So, adding some other meaningless numbers to at least give some idea: we have specialized load processes using Postgres where we reach insert counts of around one million records per second. This is the *compound* insert count of multiple parallel streams that read data from one table and insert it in one or more other tables. So you can definitely go faster, but it depends in great amount on how you process the data and what you run on.\n> If you run on clouds (at least on Azure, which we use) you can have other nasty surprises as they do not really seem to have disks but instead a set of old people writing the data onto paper... On normal (non-ephemeral) disks you will not get close to these numbers.\n>\n> Things to do are:\n> * use the copy command to do the actual insert. We wrote a special kind of \"insert\" that provides the input stream for the copy command dynamically as data becomes available.\n> * Do the reading of data in a different thread than the writing, and have a large records buffer between the two processes. 
In that way reading the data can continue while the writing process writes.\n>\n> Regards,\n>\n> Frits\n>\n>\n> On Fri, Mar 26, 2021 at 1:20 PM Geervan Hayatnagarkar <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> We are trying to find maximum throughput in terms of transactions per second (or simultaneous read+write SQL operations per second) for a use case that does one ACID transaction (consisting of tens of reads and tens of updates/ inserts) per incoming stream element on a high-volume high-velocity stream of data.\n>>\n>> Our load test showed us that PostgreSQL version 11/12 could support upto 10,000 to 11,000 such ACID transactions per second = 55K read SQL operations per second along with simultaneous 77 K write SQL operations per second (= total 132 K total read+write SQL operations per second)\n>>\n>> The underlying hardware limit is much more. But is this the maximum PostgreSQL can support? If not what are the server tuning parameters we should consider for this scale of throughput ?\n>>\n>> Thanks,\n>> Arti\n>>\n\n\n",
"msg_date": "Tue, 30 Mar 2021 17:21:14 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High-volume writes - what is the max throughput possible"
}
] |
[
{
"msg_contents": "Hi,\nWe migrated our Oracle Databases to PostgreSQL. One of the simple select\nquery that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL.\nCould you please advise. Please find query and query plans below. Gather\ncost seems high. Will increasing max_parallel_worker_per_gather help?\n\nexplain analyse SELECT bom.address_key dom2137,bom.address_type_key\ndom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\ndom1955,bom.address_role_key dom1711,bom.delivery_point_created\ndom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\ndom1186,bom.premises_number_1 dom1777,bom.premises_number_2\ndom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\ndom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\ndom653,bom.apartment_number dom1732,bom.apartment_letter\ndom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\ndom1272,bom.address_family_id dom1796,bom.cur_address_key\ndom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\ndom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\naddress_key = 6113763\n\n[\n{\n\"Plan\": {\n\"Node Type\": \"Gather\",\n\"Parallel Aware\": false,\n\"Actual Rows\": 1,\n\"Actual Loops\": 1,\n\"Workers Planned\": 1,\n\"Workers Launched\": 1,\n\"Single Copy\": true,\n\"Plans\": [\n{\n\"Node Type\": \"Index Scan\",\n\"Parent Relationship\": \"Outer\",\n\"Parallel Aware\": false,\n\"Scan Direction\": \"Forward\",\n\"Index Name\": \"address1_i7\",\n\"Relation Name\": \"address\",\n\"Alias\": \"dom\",\n\"Actual Rows\": 1,\n\"Actual Loops\": 1,\n\"Index Cond\": \"(address_key = 6113763)\",\n\"Rows Removed by Index Recheck\": 0\n}\n]\n},\n\"Triggers\": []\n}\n]\n\n\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\ntime=174.318..198.539 rows=1 loops=1)\"\n\" Workers Planned: 1\"\n\" Workers Launched: 1\"\n\" Single Copy: true\"\n\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 
rows=1\nwidth=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n\" Index Cond: (address_key = 6113763)\"\n\"Planning Time: 0.221 ms\"\n\"Execution Time: 198.601 ms\"\n\n\n\nRegards,\nAditya.\n\nHi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: 
true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"Regards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 19:08:22 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on Oracle\n after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 7:08 PM aditya desai <[email protected]> wrote:\n>\n> Hi,\n> We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?\n\nNo it doesn't. For small tables, parallelism might not help since it\ndoesn't come for free. Try setting max_parallel_worker_per_gather to 0\ni.e. without parallel query.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 3 Apr 2021 19:17:47 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]> napsal:\n\n> Hi,\n> We migrated our Oracle Databases to PostgreSQL. One of the simple select\n> query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL.\n> Could you please advise. Please find query and query plans below. Gather\n> cost seems high. Will increasing max_parallel_worker_per_gather help?\n>\n> explain analyse SELECT bom.address_key dom2137,bom.address_type_key\n> dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\n> dom1955,bom.address_role_key dom1711,bom.delivery_point_created\n> dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\n> dom1186,bom.premises_number_1 dom1777,bom.premises_number_2\n> dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\n> dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\n> dom653,bom.apartment_number dom1732,bom.apartment_letter\n> dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\n> dom1272,bom.address_family_id dom1796,bom.cur_address_key\n> dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\n> dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\n> address_key = 6113763\n>\n> [\n> {\n> \"Plan\": {\n> \"Node Type\": \"Gather\",\n> \"Parallel Aware\": false,\n> \"Actual Rows\": 1,\n> \"Actual Loops\": 1,\n> \"Workers Planned\": 1,\n> \"Workers Launched\": 1,\n> \"Single Copy\": true,\n> \"Plans\": [\n> {\n> \"Node Type\": \"Index Scan\",\n> \"Parent Relationship\": \"Outer\",\n> \"Parallel Aware\": false,\n> \"Scan Direction\": \"Forward\",\n> \"Index Name\": \"address1_i7\",\n> \"Relation Name\": \"address\",\n> \"Alias\": \"dom\",\n> \"Actual Rows\": 1,\n> \"Actual Loops\": 1,\n> \"Index Cond\": \"(address_key = 6113763)\",\n> \"Rows Removed by Index Recheck\": 0\n> }\n> ]\n> },\n> \"Triggers\": []\n> }\n> ]\n>\n> \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> time=174.318..198.539 
rows=1 loops=1)\"\n> \" Workers Planned: 1\"\n> \" Workers Launched: 1\"\n> \" Single Copy: true\"\n> \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> \" Index Cond: (address_key = 6113763)\"\n> \"Planning Time: 0.221 ms\"\n> \"Execution Time: 198.601 ms\"\n>\n\nYou should have broken configuration - there is not any reason to start\nparallelism - probably some option in postgresql.conf has very bad value.\nSecond - it's crazy to see 200 ms just on interprocess communication -\nmaybe your CPU is overutilized.\n\nRegards\n\nPavel\n\n\n\n\n>\n>\n> Regards,\n> Aditya.\n>\n\nso 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]> napsal:Hi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 
1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"You should have broken configuration - there is not any reason to start parallelism - probably some option in postgresql.conf has very bad value. Second - it's crazy to see 200 ms just on interprocess communication - maybe your CPU is overutilized.RegardsPavelRegards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 16:08:01 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "It seems like something is missing. Is this table partitioned? How long ago\nwas migration done? Has vacuum freeze and analyze of tables been done? Was\nindex created after populating data or reindexed after perhaps? What\nversion are you using?\n\nIt seems like something is missing. Is this table partitioned? How long ago was migration done? Has vacuum freeze and analyze of tables been done? Was index created after populating data or reindexed after perhaps? What version are you using?",
"msg_date": "Sat, 3 Apr 2021 08:10:23 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> so 3. 4. 2021 v 15:38 odes�latel aditya desai <[email protected]> napsal:\n> > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > time=174.318..198.539 rows=1 loops=1)\"\n> > \" Workers Planned: 1\"\n> > \" Workers Launched: 1\"\n> > \" Single Copy: true\"\n> > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > \" Index Cond: (address_key = 6113763)\"\n> > \"Planning Time: 0.221 ms\"\n> > \"Execution Time: 198.601 ms\"\n> \n> You should have broken configuration - there is not any reason to start\n> parallelism - probably some option in postgresql.conf has very bad value.\n> Second - it's crazy to see 200 ms just on interprocess communication -\n> maybe your CPU is overutilized.\n\nIt seems like force_parallel_mode is set, which is for debugging and not for\n\"forcing things to go faster\". Maybe we should rename the parameter, like\nparallel_mode_testing=on.\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 3 Apr 2021 09:16:51 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Michael,\nThanks for your response.\nIs this table partitioned? - No\nHow long ago was migration done? - 27th March 2021\nHas vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\n Was index created after populating data or reindexed after perhaps? -\nIndex was created after data load and reindex was executed on all tables\nyesterday.\n Version is PostgreSQL-11\n\nRegards,\nAditya.\n\n\nOn Sat, Apr 3, 2021 at 7:40 PM Michael Lewis <[email protected]> wrote:\n\n> It seems like something is missing. Is this table partitioned? How long\n> ago was migration done? Has vacuum freeze and analyze of tables been done?\n> Was index created after populating data or reindexed after perhaps? What\n> version are you using?\n>\n\nHi Michael,Thanks for your response.Is this table partitioned? - NoHow long ago was migration done? - 27th March 2021Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze. Was index created after populating data or reindexed after perhaps? - Index was created after data load and reindex was executed on all tables yesterday. Version is PostgreSQL-11Regards,Aditya.On Sat, Apr 3, 2021 at 7:40 PM Michael Lewis <[email protected]> wrote:It seems like something is missing. Is this table partitioned? How long ago was migration done? Has vacuum freeze and analyze of tables been done? Was index created after populating data or reindexed after perhaps? What version are you using?",
"msg_date": "Sat, 3 Apr 2021 20:29:22 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n> Hi Michael,\n> Thanks for your response.\n> Is this table partitioned? - No\n> How long ago was migration done? - 27th March 2021\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\n> �Was index created after populating data or reindexed after perhaps? - Index\n> was created after data load and reindex was executed on all tables yesterday.\n> �Version is PostgreSQL-11\n\nFYI, the output of these queries will show u what changes have been made\nto the configuration file:\n\n\tSELECT version();\n\t\n\tSELECT name, current_setting(name), source\n\tFROM pg_settings\n\tWHERE source NOT IN ('default', 'override');\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:04:17 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Justin,\nYes, force_parallel_mode is on. Should we set it off?\n\nRegards,\nAditya.\n\nOn Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65\n> rows=1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism - probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n>\n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\". Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n>\n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n>\n> --\n> Justin\n>\n\nHi Justin,Yes, force_parallel_mode is on. Should we set it off?Regards,Aditya.On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> so 3. 4. 
2021 v 15:38 odesílatel aditya desai <[email protected]> napsal:\n> > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > time=174.318..198.539 rows=1 loops=1)\"\n> > \" Workers Planned: 1\"\n> > \" Workers Launched: 1\"\n> > \" Single Copy: true\"\n> > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1\n> > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > \" Index Cond: (address_key = 6113763)\"\n> > \"Planning Time: 0.221 ms\"\n> > \"Execution Time: 198.601 ms\"\n> \n> You should have broken configuration - there is not any reason to start\n> parallelism - probably some option in postgresql.conf has very bad value.\n> Second - it's crazy to see 200 ms just on interprocess communication -\n> maybe your CPU is overutilized.\n\nIt seems like force_parallel_mode is set, which is for debugging and not for\n\"forcing things to go faster\". Maybe we should rename the parameter, like\nparallel_mode_testing=on.\n\nhttp://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 20:38:18 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> Hi Justin,\n> Yes, force_parallel_mode is on. Should we set it off?\n\nYes. I bet someone set it without reading our docs:\n\n\thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n-->\tAllows the use of parallel queries for testing purposes even in cases\n-->\twhere no performance benefit is expected.\n\nWe might need to clarify this sentence to be clearer it is _only_ for\ntesting. Also, I suggest you review _all_ changes that have been made\nto the server since I am worried other unwise changes might also have\nbeen made.\n\n---------------------------------------------------------------------------\n\n> \n> Regards,\n> Aditya.\n> \n> On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <[email protected]> wrote:\n> \n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odes�latel aditya desai <[email protected]>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows\n> =1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism -� probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n> \n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\".� Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n> \n> 
http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> \n> --\n> Justin\n> \n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:12:01 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Thanks Bruce!! Will set it off and retry.\n\nOn Sat, Apr 3, 2021 at 8:42 PM Bruce Momjian <[email protected]> wrote:\n\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > Hi Justin,\n> > Yes, force_parallel_mode is on. Should we set it off?\n>\n> Yes. I bet someone set it without reading our docs:\n>\n>\n> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>\n> --> Allows the use of parallel queries for testing purposes even in\n> cases\n> --> where no performance benefit is expected.\n>\n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing. Also, I suggest you review _all_ changes that have been made\n> to the server since I am worried other unwise changes might also have\n> been made.\n>\n> ---------------------------------------------------------------------------\n>\n> >\n> > Regards,\n> > Aditya.\n> >\n> > On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <[email protected]>\n> wrote:\n> >\n> > On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > > so 3. 4. 
2021 v 15:38 odesílatel aditya desai <[email protected]>\n> > napsal:\n> > > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > > time=174.318..198.539 rows=1 loops=1)\"\n> > > > \" Workers Planned: 1\"\n> > > > \" Workers Launched: 1\"\n> > > > \" Single Copy: true\"\n> > > > \" -> Index Scan using address1_i7 on address1 dom\n> (cost=0.43..2.65 rows\n> > =1\n> > > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > > \" Index Cond: (address_key = 6113763)\"\n> > > > \"Planning Time: 0.221 ms\"\n> > > > \"Execution Time: 198.601 ms\"\n> > >\n> > > You should have broken configuration - there is not any reason to\n> start\n> > > parallelism - probably some option in postgresql.conf has very bad\n> > value.\n> > > Second - it's crazy to see 200 ms just on interprocess\n> communication -\n> > > maybe your CPU is overutilized.\n> >\n> > It seems like force_parallel_mode is set, which is for debugging and\n> not\n> > for\n> > \"forcing things to go faster\". Maybe we should rename the\n> parameter, like\n> > parallel_mode_testing=on.\n> >\n> >\n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> >\n> > --\n> > Justin\n> >\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nThanks Bruce!! Will set it off and retry.On Sat, Apr 3, 2021 at 8:42 PM Bruce Momjian <[email protected]> wrote:On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> Hi Justin,\n> Yes, force_parallel_mode is on. Should we set it off?\n\nYes. I bet someone set it without reading our docs:\n\n https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n--> Allows the use of parallel queries for testing purposes even in cases\n--> where no performance benefit is expected.\n\nWe might need to clarify this sentence to be clearer it is _only_ for\ntesting. 
Also, I suggest you review _all_ changes that have been made\nto the server since I am worried other unwise changes might also have\nbeen made.\n\n---------------------------------------------------------------------------\n\n> \n> Regards,\n> Aditya.\n> \n> On Sat, Apr 3, 2021 at 7:46 PM Justin Pryzby <[email protected]> wrote:\n> \n> On Sat, Apr 03, 2021 at 04:08:01PM +0200, Pavel Stehule wrote:\n> > so 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]>\n> napsal:\n> > > \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n> > > time=174.318..198.539 rows=1 loops=1)\"\n> > > \" Workers Planned: 1\"\n> > > \" Workers Launched: 1\"\n> > > \" Single Copy: true\"\n> > > \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows\n> =1\n> > > width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n> > > \" Index Cond: (address_key = 6113763)\"\n> > > \"Planning Time: 0.221 ms\"\n> > > \"Execution Time: 198.601 ms\"\n> >\n> > You should have broken configuration - there is not any reason to start\n> > parallelism - probably some option in postgresql.conf has very bad\n> value.\n> > Second - it's crazy to see 200 ms just on interprocess communication -\n> > maybe your CPU is overutilized.\n> \n> It seems like force_parallel_mode is set, which is for debugging and not\n> for\n> \"forcing things to go faster\". Maybe we should rename the parameter, like\n> parallel_mode_testing=on.\n> \n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html\n> \n> --\n> Justin\n> \n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 20:52:25 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 11:12:01AM -0400, Bruce Momjian wrote:\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > Hi Justin,\n> > Yes, force_parallel_mode is on. Should we set it off?\n> \n> Yes. I bet someone set it without reading our docs:\n> \n> \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> -->\tAllows the use of parallel queries for testing purposes even in cases\n> -->\twhere no performance benefit is expected.\n> \n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing. Also, I suggest you review _all_ changes that have been made\n> to the server since I am worried other unwise changes might also have\n> been made.\n\nThis brings up an issue we see occasionally. You can either leave\neverything as default, get advice on which defaults to change, or study\neach setting and then change defaults. Changing defaults without study\noften leads to poor configurations, as we are seeing here.\n\nThe lucky thing is that you noticed a slow query and found the\nmisconfiguration, but I am sure there are many servers where\nmisconfiguration is never detected. I wish I knew how to improve this\nsituation, but user education seems to be all we can do.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:24:02 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "adding the group.\n\n aad_log_min_messages | warning\n | configuration file\n application_name | psql\n | client\n archive_command |\nc:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\nconfiguration file\n archive_mode | on\n | configuration file\n archive_timeout | 15min\n | configuration file\n authentication_timeout | 30s\n | configuration file\n autovacuum_analyze_scale_factor | 0.05\n | configuration file\n autovacuum_naptime | 15s\n | configuration file\n autovacuum_vacuum_scale_factor | 0.05\n | configuration file\n bgwriter_delay | 20ms\n | configuration file\n bgwriter_flush_after | 512kB\n | configuration file\n bgwriter_lru_maxpages | 100\n | configuration file\n checkpoint_completion_target | 0.9\n | configuration file\n checkpoint_flush_after | 256kB\n | configuration file\n checkpoint_timeout | 5min\n | configuration file\n client_encoding | UTF8\n | client\n connection_ID |\n5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n connection_PeerIP |\nfd40:4d4a:11:5067:6d11:500:a07:5144 | client\n connection_Vnet | on\n | client\n constraint_exclusion | partition\n | configuration file\n data_sync_retry | on\n | configuration file\n DateStyle | ISO, MDY\n | configuration file\n default_text_search_config | pg_catalog.english\n | configuration file\n dynamic_shared_memory_type | windows\n | configuration file\n effective_cache_size | 160GB\n | configuration file\n enable_seqscan | off\n | configuration file\n force_parallel_mode | off\n | configuration file\n from_collapse_limit | 15\n | configuration file\n full_page_writes | off\n | configuration file\n hot_standby | on\n | configuration file\n hot_standby_feedback | on\n | configuration file\n join_collapse_limit | 15\n | configuration file\n lc_messages | English_United States.1252\n | configuration file\n lc_monetary | English_United States.1252\n | configuration file\n lc_numeric | English_United States.1252\n | configuration file\n lc_time | English_United 
States.1252\n | configuration file\n listen_addresses | *\n | configuration file\n log_checkpoints | on\n | configuration file\n log_connections | on\n | configuration file\n log_destination | stderr\n | configuration file\n log_file_mode | 0640\n | configuration file\n log_line_prefix | %t-%c-\n | configuration file\n log_min_messages_internal | info\n | configuration file\n log_rotation_age | 1h\n | configuration file\n log_rotation_size | 100MB\n | configuration file\n log_timezone | UTC\n | configuration file\n logging_collector | on\n | configuration file\n maintenance_work_mem | 1GB\n | configuration file\n max_connections | 1900\n | configuration file\n max_parallel_workers_per_gather | 16\n | configuration file\n max_replication_slots | 10\n | configuration file\n max_stack_depth | 2MB\n | environment variable\n max_wal_senders | 10\n | configuration file\n max_wal_size | 26931MB\n | configuration file\n min_wal_size | 4GB\n | configuration file\n pg_qs.query_capture_mode | top\n | configuration file\n pgms_wait_sampling.query_capture_mode | all\n | configuration file\n pgstat_udp_port | 20224\n | command line\n port | 20224\n | command line\n random_page_cost | 1.1\n | configuration file\n shared_buffers | 64GB\n | configuration file\n ssl | on\n | configuration file\n ssl_ca_file | root.crt\n | configuration file\n superuser_reserved_connections | 5\n | configuration file\n TimeZone | EET\n | configuration file\n track_io_timing | on\n | configuration file\n wal_buffers | 128MB\n | configuration file\n wal_keep_segments | 25\n | configuration file\n wal_level | replica\n | configuration file\n work_mem | 16MB\n | configuration file\n\n\nOn Sat, Apr 3, 2021 at 8:59 PM aditya desai <[email protected]> wrote:\n\n> Hi Bruce,\n> Please find the below output.force_parallel_mode if off now.\n>\n> aad_log_min_messages | warning\n> | configuration file\n> application_name | psql\n> | client\n> archive_command |\n> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive 
blob \"%f\" \"%p\" |\n> configuration file\n> archive_mode | on\n> | configuration file\n> archive_timeout | 15min\n> | configuration file\n> authentication_timeout | 30s\n> | configuration file\n> autovacuum_analyze_scale_factor | 0.05\n> | configuration file\n> autovacuum_naptime | 15s\n> | configuration file\n> autovacuum_vacuum_scale_factor | 0.05\n> | configuration file\n> bgwriter_delay | 20ms\n> | configuration file\n> bgwriter_flush_after | 512kB\n> | configuration file\n> bgwriter_lru_maxpages | 100\n> | configuration file\n> checkpoint_completion_target | 0.9\n> | configuration file\n> checkpoint_flush_after | 256kB\n> | configuration file\n> checkpoint_timeout | 5min\n> | configuration file\n> client_encoding | UTF8\n> | client\n> connection_ID |\n> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n> connection_PeerIP |\n> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n> connection_Vnet | on\n> | client\n> constraint_exclusion | partition\n> | configuration file\n> data_sync_retry | on\n> | configuration file\n> DateStyle | ISO, MDY\n> | configuration file\n> default_text_search_config | pg_catalog.english\n> | configuration file\n> dynamic_shared_memory_type | windows\n> | configuration file\n> effective_cache_size | 160GB\n> | configuration file\n> enable_seqscan | off\n> | configuration file\n> force_parallel_mode | off\n> | configuration file\n> from_collapse_limit | 15\n> | configuration file\n> full_page_writes | off\n> | configuration file\n> hot_standby | on\n> | configuration file\n> hot_standby_feedback | on\n> | configuration file\n> join_collapse_limit | 15\n> | configuration file\n> lc_messages | English_United States.1252\n> | configuration file\n> lc_monetary | English_United States.1252\n> | configuration file\n> lc_numeric | English_United States.1252\n> | configuration file\n> lc_time | English_United States.1252\n> | configuration file\n> listen_addresses | *\n> | configuration file\n> log_checkpoints | on\n> | configuration file\n> 
log_connections | on\n> | configuration file\n> log_destination | stderr\n> | configuration file\n> log_file_mode | 0640\n> | configuration file\n> log_line_prefix | %t-%c-\n> | configuration file\n> log_min_messages_internal | info\n> | configuration file\n> log_rotation_age | 1h\n> | configuration file\n> log_rotation_size | 100MB\n> | configuration file\n> log_timezone | UTC\n> | configuration file\n> logging_collector | on\n> | configuration file\n> maintenance_work_mem | 1GB\n> | configuration file\n> max_connections | 1900\n> | configuration file\n> max_parallel_workers_per_gather | 16\n> | configuration file\n> max_replication_slots | 10\n> | configuration file\n> max_stack_depth | 2MB\n> | environment variable\n> max_wal_senders | 10\n> | configuration file\n> max_wal_size | 26931MB\n> | configuration file\n> min_wal_size | 4GB\n> | configuration file\n> pg_qs.query_capture_mode | top\n> | configuration file\n> pgms_wait_sampling.query_capture_mode | all\n> | configuration file\n> pgstat_udp_port | 20224\n> | command line\n> port | 20224\n> | command line\n> random_page_cost | 1.1\n> | configuration file\n> shared_buffers | 64GB\n> | configuration file\n> ssl | on\n> | configuration file\n> ssl_ca_file | root.crt\n> | configuration file\n> superuser_reserved_connections | 5\n> | configuration file\n> TimeZone | EET\n> | configuration file\n> track_io_timing | on\n> | configuration file\n> wal_buffers | 128MB\n> | configuration file\n> wal_keep_segments | 25\n> | configuration file\n> wal_level | replica\n> | configuration file\n> work_mem | 16MB\n> | configuration file\n>\n>\n> Regards,\n> Aditya.\n>\n>\n>\n> On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <[email protected]> wrote:\n>\n>> On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n>> > Hi Michael,\n>> > Thanks for your response.\n>> > Is this table partitioned? - No\n>> > How long ago was migration done? - 27th March 2021\n>> > Has vacuum freeze and analyze of tables been done? 
- We ran vacuum\n>> analyze.\n>> > Was index created after populating data or reindexed after perhaps? -\n>> Index\n>> > was created after data load and reindex was executed on all tables\n>> yesterday.\n>> > Version is PostgreSQL-11\n>>\n>> FYI, the output of these queries will show u what changes have been made\n>> to the configuration file:\n>>\n>> SELECT version();\n>>\n>> SELECT name, current_setting(name), source\n>> FROM pg_settings\n>> WHERE source NOT IN ('default', 'override');\n>>\n>> --\n>> Bruce Momjian <[email protected]> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> If only the physical world exists, free will is an illusion.\n>>\n>>\n\nadding the group. aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan 
| off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file 
work_mem | 16MB | configuration fileOn Sat, Apr 3, 2021 at 8:59 PM aditya desai <[email protected]> wrote:Hi Bruce,Please find the below output.force_parallel_mode if off now. aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United 
States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration fileRegards,Aditya.On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <[email protected]> wrote:On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\r\n> Hi Michael,\r\n> Thanks for your response.\r\n> Is this table partitioned? - No\r\n> How long ago was migration done? - 27th March 2021\r\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\r\n> Was index created after populating data or reindexed after perhaps? 
- Index\r\n> was created after data load and reindex was executed on all tables yesterday.\r\n> Version is PostgreSQL-11\n\r\nFYI, the output of these queries will show u what changes have been made\r\nto the configuration file:\n\r\n SELECT version();\n\r\n SELECT name, current_setting(name), source\r\n FROM pg_settings\r\n WHERE source NOT IN ('default', 'override');\n\r\n-- \r\n Bruce Momjian <[email protected]> https://momjian.us\r\n EDB https://enterprisedb.com\n\r\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 21:00:24 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "I will gather all information and get back to you\n\nOn Sat, Apr 3, 2021 at 9:00 PM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> so 3. 4. 2021 v 17:15 odesílatel aditya desai <[email protected]> napsal:\n>\n>> Hi Pavel,\n>> Thanks for response. Please see below.\n>> work_mem=16MB\n>> maintenance_work_mem=1GB\n>> effective_cache_size=160GB\n>> shared_buffers=64GB\n>> force_parallel_mode=ON\n>>\n>\n> force_parallel_mode is very bad idea. efective_cache_size=160GB can be too\n> much too. work_mem 16 MB is maybe too low. The configuration looks a little\n> bit chaotic :)\n>\n> How much has RAM your server? How much CPU cores are there? What is\n> max_connections?\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Regards,\n>> Aditya.\n>>\n>>\n>> On Sat, Apr 3, 2021 at 7:38 PM Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> so 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]>\n>>> napsal:\n>>>\n>>>> Hi,\n>>>> We migrated our Oracle Databases to PostgreSQL. One of the simple\n>>>> select query that takes 4 ms on Oracle is taking around 200 ms on\n>>>> PostgreSQL. Could you please advise. Please find query and query plans\n>>>> below. Gather cost seems high. 
Will increasing\n>>>> max_parallel_worker_per_gather help?\n>>>>\n>>>> explain analyse SELECT bom.address_key dom2137,bom.address_type_key\n>>>> dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key\n>>>> dom1955,bom.address_role_key dom1711,bom.delivery_point_created\n>>>> dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name\n>>>> dom1186,bom.premises_number_1 dom1777,bom.premises_number_2\n>>>> dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2\n>>>> dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box\n>>>> dom653,bom.apartment_number dom1732,bom.apartment_letter\n>>>> dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key\n>>>> dom1272,bom.address_family_id dom1796,bom.cur_address_key\n>>>> dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time\n>>>> dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE\n>>>> address_key = 6113763\n>>>>\n>>>> [\n>>>> {\n>>>> \"Plan\": {\n>>>> \"Node Type\": \"Gather\",\n>>>> \"Parallel Aware\": false,\n>>>> \"Actual Rows\": 1,\n>>>> \"Actual Loops\": 1,\n>>>> \"Workers Planned\": 1,\n>>>> \"Workers Launched\": 1,\n>>>> \"Single Copy\": true,\n>>>> \"Plans\": [\n>>>> {\n>>>> \"Node Type\": \"Index Scan\",\n>>>> \"Parent Relationship\": \"Outer\",\n>>>> \"Parallel Aware\": false,\n>>>> \"Scan Direction\": \"Forward\",\n>>>> \"Index Name\": \"address1_i7\",\n>>>> \"Relation Name\": \"address\",\n>>>> \"Alias\": \"dom\",\n>>>> \"Actual Rows\": 1,\n>>>> \"Actual Loops\": 1,\n>>>> \"Index Cond\": \"(address_key = 6113763)\",\n>>>> \"Rows Removed by Index Recheck\": 0\n>>>> }\n>>>> ]\n>>>> },\n>>>> \"Triggers\": []\n>>>> }\n>>>> ]\n>>>>\n>>>> \"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual\n>>>> time=174.318..198.539 rows=1 loops=1)\"\n>>>> \" Workers Planned: 1\"\n>>>> \" Workers Launched: 1\"\n>>>> \" Single Copy: true\"\n>>>> \" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65\n>>>> rows=1 
width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\n>>>> \" Index Cond: (address_key = 6113763)\"\n>>>> \"Planning Time: 0.221 ms\"\n>>>> \"Execution Time: 198.601 ms\"\n>>>>\n>>>\n>>> You should have broken configuration - there is not any reason to start\n>>> parallelism - probably some option in postgresql.conf has very bad value.\n>>> Second - it's crazy to see 200 ms just on interprocess communication -\n>>> maybe your CPU is overutilized.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>\n>>>\n>>>>\n>>>>\n>>>> Regards,\n>>>> Aditya.\n>>>>\n>>>\n\nI will gather all information and get back to youOn Sat, Apr 3, 2021 at 9:00 PM Pavel Stehule <[email protected]> wrote:so 3. 4. 2021 v 17:15 odesílatel aditya desai <[email protected]> napsal:Hi Pavel,Thanks for response. Please see below.work_mem=16MBmaintenance_work_mem=1GBeffective_cache_size=160GBshared_buffers=64GBforce_parallel_mode=ONforce_parallel_mode is very bad idea. efective_cache_size=160GB can be too much too. work_mem 16 MB is maybe too low. The configuration looks a little bit chaotic :)How much has RAM your server? How much CPU cores are there? What is max_connections? RegardsPavel Regards,Aditya.On Sat, Apr 3, 2021 at 7:38 PM Pavel Stehule <[email protected]> wrote:so 3. 4. 2021 v 15:38 odesílatel aditya desai <[email protected]> napsal:Hi,We migrated our Oracle Databases to PostgreSQL. One of the simple select query that takes 4 ms on Oracle is taking around 200 ms on PostgreSQL. Could you please advise. Please find query and query plans below. Gather cost seems high. 
Will increasing max_parallel_worker_per_gather help?explain analyse SELECT bom.address_key dom2137,bom.address_type_key dom1727,bom.start_date dom1077,bom.end_date dom828,bom.address_status_key dom1955,bom.address_role_key dom1711,bom.delivery_point_created dom2362,bom.postcode dom873,bom.postcode_name dom1390,bom.street_name dom1186,bom.premises_number_1 dom1777,bom.premises_number_2 dom1778,bom.premises_letter_1 dom1784,bom.premises_letter_2 dom1785,bom.premises_separator dom1962,bom.stairway dom892,bom.po_box dom653,bom.apartment_number dom1732,bom.apartment_letter dom1739,bom.street_key dom1097,bom.address_use_key dom1609,bom.language_key dom1272,bom.address_family_id dom1796,bom.cur_address_key dom2566,bom.created_by dom1052,bom.modified_by dom1158,bom.creation_time dom1392,bom.modification_time dom1813 FROM DEPT.address dom WHERE address_key = 6113763[{\"Plan\": {\"Node Type\": \"Gather\",\"Parallel Aware\": false,\"Actual Rows\": 1,\"Actual Loops\": 1,\"Workers Planned\": 1,\"Workers Launched\": 1,\"Single Copy\": true,\"Plans\": [{\"Node Type\": \"Index Scan\",\"Parent Relationship\": \"Outer\",\"Parallel Aware\": false,\"Scan Direction\": \"Forward\",\"Index Name\": \"address1_i7\",\"Relation Name\": \"address\",\"Alias\": \"dom\",\"Actual Rows\": 1,\"Actual Loops\": 1,\"Index Cond\": \"(address_key = 6113763)\",\"Rows Removed by Index Recheck\": 0}]},\"Triggers\": []}]\"Gather (cost=1000.43..1002.75 rows=1 width=127) (actual time=174.318..198.539 rows=1 loops=1)\"\" Workers Planned: 1\"\" Workers Launched: 1\"\" Single Copy: true\"\" -> Index Scan using address1_i7 on address1 dom (cost=0.43..2.65 rows=1 width=127) (actual time=0.125..0.125 rows=1 loops=1)\"\" Index Cond: (address_key = 6113763)\"\"Planning Time: 0.221 ms\"\"Execution Time: 198.601 ms\"You should have broken configuration - there is not any reason to start parallelism - probably some option in postgresql.conf has very bad value. 
Second - it's crazy to see 200 ms just on interprocess communication - maybe your CPU is overutilized.RegardsPavelRegards,Aditya.",
"msg_date": "Sat, 3 Apr 2021 21:03:42 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 17:30 odesílatel aditya desai <[email protected]> napsal:\n\n> adding the group.\n>\n> aad_log_min_messages | warning\n> | configuration file\n> application_name | psql\n> | client\n> archive_command |\n> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\n> configuration file\n> archive_mode | on\n> | configuration file\n> archive_timeout | 15min\n> | configuration file\n> authentication_timeout | 30s\n> | configuration file\n> autovacuum_analyze_scale_factor | 0.05\n> | configuration file\n> autovacuum_naptime | 15s\n> | configuration file\n> autovacuum_vacuum_scale_factor | 0.05\n> | configuration file\n> bgwriter_delay | 20ms\n> | configuration file\n> bgwriter_flush_after | 512kB\n> | configuration file\n> bgwriter_lru_maxpages | 100\n> | configuration file\n> checkpoint_completion_target | 0.9\n> | configuration file\n> checkpoint_flush_after | 256kB\n> | configuration file\n> checkpoint_timeout | 5min\n> | configuration file\n> client_encoding | UTF8\n> | client\n> connection_ID |\n> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n> connection_PeerIP |\n> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n> connection_Vnet | on\n> | client\n> constraint_exclusion | partition\n> | configuration file\n> data_sync_retry | on\n> | configuration file\n> DateStyle | ISO, MDY\n> | configuration file\n> default_text_search_config | pg_catalog.english\n> | configuration file\n> dynamic_shared_memory_type | windows\n> | configuration file\n> effective_cache_size | 160GB\n> | configuration file\n> enable_seqscan | off\n> | configuration file\n> force_parallel_mode | off\n> | configuration file\n> from_collapse_limit | 15\n> | configuration file\n> full_page_writes | off\n> | configuration file\n> hot_standby | on\n> | configuration file\n> hot_standby_feedback | on\n> | configuration file\n> join_collapse_limit | 15\n> | configuration file\n> lc_messages | English_United States.1252\n> | configuration file\n> lc_monetary | 
English_United States.1252\n> | configuration file\n> lc_numeric | English_United States.1252\n> | configuration file\n> lc_time | English_United States.1252\n> | configuration file\n> listen_addresses | *\n> | configuration file\n> log_checkpoints | on\n> | configuration file\n> log_connections | on\n> | configuration file\n> log_destination | stderr\n> | configuration file\n> log_file_mode | 0640\n> | configuration file\n> log_line_prefix | %t-%c-\n> | configuration file\n> log_min_messages_internal | info\n> | configuration file\n> log_rotation_age | 1h\n> | configuration file\n> log_rotation_size | 100MB\n> | configuration file\n> log_timezone | UTC\n> | configuration file\n> logging_collector | on\n> | configuration file\n> maintenance_work_mem | 1GB\n> | configuration file\n> max_connections | 1900\n> | configuration file\n> max_parallel_workers_per_gather | 16\n> | configuration file\n> max_replication_slots | 10\n> | configuration file\n> max_stack_depth | 2MB\n> | environment variable\n> max_wal_senders | 10\n> | configuration file\n> max_wal_size | 26931MB\n> | configuration file\n> min_wal_size | 4GB\n> | configuration file\n> pg_qs.query_capture_mode | top\n> | configuration file\n> pgms_wait_sampling.query_capture_mode | all\n> | configuration file\n> pgstat_udp_port | 20224\n> | command line\n> port | 20224\n> | command line\n> random_page_cost | 1.1\n> | configuration file\n> shared_buffers | 64GB\n> | configuration file\n> ssl | on\n> | configuration file\n> ssl_ca_file | root.crt\n> | configuration file\n> superuser_reserved_connections | 5\n> | configuration file\n> TimeZone | EET\n> | configuration file\n> track_io_timing | on\n> | configuration file\n> wal_buffers | 128MB\n> | configuration file\n> wal_keep_segments | 25\n> | configuration file\n> wal_level | replica\n> | configuration file\n> work_mem | 16MB\n> | configuration file\n>\n>\nmax_connections | 1900\n\nit is really not good - there can be very high CPU overloading with a lot\nof 
others issues.\n\n\n\n> On Sat, Apr 3, 2021 at 8:59 PM aditya desai <[email protected]> wrote:\n>\n>> Hi Bruce,\n>> Please find the below output.force_parallel_mode if off now.\n>>\n>> aad_log_min_messages | warning\n>> | configuration file\n>> application_name | psql\n>> | client\n>> archive_command |\n>> c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" |\n>> configuration file\n>> archive_mode | on\n>> | configuration file\n>> archive_timeout | 15min\n>> | configuration file\n>> authentication_timeout | 30s\n>> | configuration file\n>> autovacuum_analyze_scale_factor | 0.05\n>> | configuration file\n>> autovacuum_naptime | 15s\n>> | configuration file\n>> autovacuum_vacuum_scale_factor | 0.05\n>> | configuration file\n>> bgwriter_delay | 20ms\n>> | configuration file\n>> bgwriter_flush_after | 512kB\n>> | configuration file\n>> bgwriter_lru_maxpages | 100\n>> | configuration file\n>> checkpoint_completion_target | 0.9\n>> | configuration file\n>> checkpoint_flush_after | 256kB\n>> | configuration file\n>> checkpoint_timeout | 5min\n>> | configuration file\n>> client_encoding | UTF8\n>> | client\n>> connection_ID |\n>> 5b59f092-444c-49df-b5d6-a7a0028a7855 | client\n>> connection_PeerIP |\n>> fd40:4d4a:11:5067:6d11:500:a07:5144 | client\n>> connection_Vnet | on\n>> | client\n>> constraint_exclusion | partition\n>> | configuration file\n>> data_sync_retry | on\n>> | configuration file\n>> DateStyle | ISO, MDY\n>> | configuration file\n>> default_text_search_config | pg_catalog.english\n>> | configuration file\n>> dynamic_shared_memory_type | windows\n>> | configuration file\n>> effective_cache_size | 160GB\n>> | configuration file\n>> enable_seqscan | off\n>> | configuration file\n>> force_parallel_mode | off\n>> | configuration file\n>> from_collapse_limit | 15\n>> | configuration file\n>> full_page_writes | off\n>> | configuration file\n>> hot_standby | on\n>> | configuration file\n>> hot_standby_feedback | on\n>> | configuration file\n>> 
join_collapse_limit | 15\n>> | configuration file\n>> lc_messages | English_United States.1252\n>> | configuration file\n>> lc_monetary | English_United States.1252\n>> | configuration file\n>> lc_numeric | English_United States.1252\n>> | configuration file\n>> lc_time | English_United States.1252\n>> | configuration file\n>> listen_addresses | *\n>> | configuration file\n>> log_checkpoints | on\n>> | configuration file\n>> log_connections | on\n>> | configuration file\n>> log_destination | stderr\n>> | configuration file\n>> log_file_mode | 0640\n>> | configuration file\n>> log_line_prefix | %t-%c-\n>> | configuration file\n>> log_min_messages_internal | info\n>> | configuration file\n>> log_rotation_age | 1h\n>> | configuration file\n>> log_rotation_size | 100MB\n>> | configuration file\n>> log_timezone | UTC\n>> | configuration file\n>> logging_collector | on\n>> | configuration file\n>> maintenance_work_mem | 1GB\n>> | configuration file\n>> max_connections | 1900\n>> | configuration file\n>> max_parallel_workers_per_gather | 16\n>> | configuration file\n>> max_replication_slots | 10\n>> | configuration file\n>> max_stack_depth | 2MB\n>> | environment variable\n>> max_wal_senders | 10\n>> | configuration file\n>> max_wal_size | 26931MB\n>> | configuration file\n>> min_wal_size | 4GB\n>> | configuration file\n>> pg_qs.query_capture_mode | top\n>> | configuration file\n>> pgms_wait_sampling.query_capture_mode | all\n>> | configuration file\n>> pgstat_udp_port | 20224\n>> | command line\n>> port | 20224\n>> | command line\n>> random_page_cost | 1.1\n>> | configuration file\n>> shared_buffers | 64GB\n>> | configuration file\n>> ssl | on\n>> | configuration file\n>> ssl_ca_file | root.crt\n>> | configuration file\n>> superuser_reserved_connections | 5\n>> | configuration file\n>> TimeZone | EET\n>> | configuration file\n>> track_io_timing | on\n>> | configuration file\n>> wal_buffers | 128MB\n>> | configuration file\n>> wal_keep_segments | 25\n>> | configuration 
file\n>> wal_level | replica\n>> | configuration file\n>> work_mem | 16MB\n>> | configuration file\n>>\n>>\n>> Regards,\n>> Aditya.\n>>\n>>\n>>\n>> On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <[email protected]> wrote:\n>>\n>>> On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\n>>> > Hi Michael,\n>>> > Thanks for your response.\n>>> > Is this table partitioned? - No\n>>> > How long ago was migration done? - 27th March 2021\n>>> > Has vacuum freeze and analyze of tables been done? - We ran vacuum\n>>> analyze.\n>>> > Was index created after populating data or reindexed after perhaps? -\n>>> Index\n>>> > was created after data load and reindex was executed on all tables\n>>> yesterday.\n>>> > Version is PostgreSQL-11\n>>>\n>>> FYI, the output of these queries will show u what changes have been made\n>>> to the configuration file:\n>>>\n>>> SELECT version();\n>>>\n>>> SELECT name, current_setting(name), source\n>>> FROM pg_settings\n>>> WHERE source NOT IN ('default', 'override');\n>>>\n>>> --\n>>> Bruce Momjian <[email protected]> https://momjian.us\n>>> EDB https://enterprisedb.com\n>>>\n>>> If only the physical world exists, free will is an illusion.\n>>>\n>>>\n\nso 3. 4. 2021 v 17:30 odesílatel aditya desai <[email protected]> napsal:adding the group. 
aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | 
configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration filemax_connections | 1900 it is really not good - there can be very high CPU overloading with a lot of others issues.On Sat, Apr 3, 2021 at 8:59 PM aditya desai <[email protected]> wrote:Hi Bruce,Please find the below output.force_parallel_mode if off now. 
aad_log_min_messages | warning | configuration file application_name | psql | client archive_command | c:\\postgres\\bin\\xlogcopy\\xlogcopy.exe archive blob \"%f\" \"%p\" | configuration file archive_mode | on | configuration file archive_timeout | 15min | configuration file authentication_timeout | 30s | configuration file autovacuum_analyze_scale_factor | 0.05 | configuration file autovacuum_naptime | 15s | configuration file autovacuum_vacuum_scale_factor | 0.05 | configuration file bgwriter_delay | 20ms | configuration file bgwriter_flush_after | 512kB | configuration file bgwriter_lru_maxpages | 100 | configuration file checkpoint_completion_target | 0.9 | configuration file checkpoint_flush_after | 256kB | configuration file checkpoint_timeout | 5min | configuration file client_encoding | UTF8 | client connection_ID | 5b59f092-444c-49df-b5d6-a7a0028a7855 | client connection_PeerIP | fd40:4d4a:11:5067:6d11:500:a07:5144 | client connection_Vnet | on | client constraint_exclusion | partition | configuration file data_sync_retry | on | configuration file DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file dynamic_shared_memory_type | windows | configuration file effective_cache_size | 160GB | configuration file enable_seqscan | off | configuration file force_parallel_mode | off | configuration file from_collapse_limit | 15 | configuration file full_page_writes | off | configuration file hot_standby | on | configuration file hot_standby_feedback | on | configuration file join_collapse_limit | 15 | configuration file lc_messages | English_United States.1252 | configuration file lc_monetary | English_United States.1252 | configuration file lc_numeric | English_United States.1252 | configuration file lc_time | English_United States.1252 | configuration file listen_addresses | * | configuration file log_checkpoints | on | configuration file log_connections | on | configuration file log_destination | stderr | 
configuration file log_file_mode | 0640 | configuration file log_line_prefix | %t-%c- | configuration file log_min_messages_internal | info | configuration file log_rotation_age | 1h | configuration file log_rotation_size | 100MB | configuration file log_timezone | UTC | configuration file logging_collector | on | configuration file maintenance_work_mem | 1GB | configuration file max_connections | 1900 | configuration file max_parallel_workers_per_gather | 16 | configuration file max_replication_slots | 10 | configuration file max_stack_depth | 2MB | environment variable max_wal_senders | 10 | configuration file max_wal_size | 26931MB | configuration file min_wal_size | 4GB | configuration file pg_qs.query_capture_mode | top | configuration file pgms_wait_sampling.query_capture_mode | all | configuration file pgstat_udp_port | 20224 | command line port | 20224 | command line random_page_cost | 1.1 | configuration file shared_buffers | 64GB | configuration file ssl | on | configuration file ssl_ca_file | root.crt | configuration file superuser_reserved_connections | 5 | configuration file TimeZone | EET | configuration file track_io_timing | on | configuration file wal_buffers | 128MB | configuration file wal_keep_segments | 25 | configuration file wal_level | replica | configuration file work_mem | 16MB | configuration fileRegards,Aditya.On Sat, Apr 3, 2021 at 8:34 PM Bruce Momjian <[email protected]> wrote:On Sat, Apr 3, 2021 at 08:29:22PM +0530, aditya desai wrote:\r\n> Hi Michael,\r\n> Thanks for your response.\r\n> Is this table partitioned? - No\r\n> How long ago was migration done? - 27th March 2021\r\n> Has vacuum freeze and analyze of tables been done? - We ran vacuum analyze.\r\n> Was index created after populating data or reindexed after perhaps? 
- Index\r\n> was created after data load and reindex was executed on all tables yesterday.\r\n> Version is PostgreSQL-11\n\r\nFYI, the output of these queries will show u what changes have been made\r\nto the configuration file:\n\r\n SELECT version();\n\r\n SELECT name, current_setting(name), source\r\n FROM pg_settings\r\n WHERE source NOT IN ('default', 'override');\n\r\n-- \r\n Bruce Momjian <[email protected]> https://momjian.us\r\n EDB https://enterprisedb.com\n\r\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 3 Apr 2021 17:35:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 09:00:24PM +0530, aditya desai wrote:\n> adding the group.\n\nPerfect. That is a lot of non-default settings, so I would be concerned\nthere are other misconfigurations in there --- the group here might have\nsome tips.\n\n> �aad_log_min_messages� � � � � � � � � | warning� � � � � � � � � � � � � � � �\n> � � � � � � � � � � � | configuration file\n\nThe above is not a PG config variable.\n\n> �connection_ID� � � � � � � � � � � � �| 5b59f092-444c-49df-b5d6-a7a0028a7855�\n> � � � � � � � � � � � �| client\n> �connection_PeerIP� � � � � � � � � � �| fd40:4d4a:11:5067:6d11:500:a07:5144� �\n> � � � � � � � � � � � | client\n> �connection_Vnet� � � � � � � � � � � �| on� � � � � � � � � � � � � � � � � �\n\nUh, these are not a PG settings. You need to show us the output of\nversion() because this is not standard Postgres. A quick search\nsuggests this is a Microsoft version of Postgres. I will stop\ncommenting.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:38:12 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>> Yes, force_parallel_mode is on. Should we set it off?\n\n> Yes. I bet someone set it without reading our docs:\n\n> \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n\n> -->\tAllows the use of parallel queries for testing purposes even in cases\n> -->\twhere no performance benefit is expected.\n\n> We might need to clarify this sentence to be clearer it is _only_ for\n> testing.\n\nI wonder why it is listed under planner options at all, and not under\ndeveloper options.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Apr 2021 11:39:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > -->\twhere no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 3 Apr 2021 10:41:14 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > -->\twhere no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nI was kind of surprised by that myself since I was working on a blog\nentry about from_collapse_limit and join_collapse_limit. I think moving\nit makes sense.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:42:59 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 10:41:14AM -0500, Justin Pryzby wrote:\n> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > >> Yes, force_parallel_mode is on. Should we set it off?\n> > \n> > > Yes. I bet someone set it without reading our docs:\n> > \n> > > \thttps://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> > \n> > > -->\tAllows the use of parallel queries for testing purposes even in cases\n> > > -->\twhere no performance benefit is expected.\n> > \n> > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > testing.\n> > \n> > I wonder why it is listed under planner options at all, and not under\n> > developer options.\n> \n> Because it's there to help DBAs catch errors in functions incorrectly marked as\n> parallel safe.\n\nUh, isn't that developer/debugging?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 3 Apr 2021 11:43:36 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Thanks Justin. Will review all parameters and get back to you.\n\nOn Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > >> Yes, force_parallel_mode is on. Should we set it off?\n> >\n> > > Yes. I bet someone set it without reading our docs:\n> >\n> > >\n> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> >\n> > > --> Allows the use of parallel queries for testing purposes even in\n> cases\n> > > --> where no performance benefit is expected.\n> >\n> > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > testing.\n> >\n> > I wonder why it is listed under planner options at all, and not under\n> > developer options.\n>\n> Because it's there to help DBAs catch errors in functions incorrectly\n> marked as\n> parallel safe.\n>\n> --\n> Justin\n>\n\nThanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 21:14:33 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Hi Justin/Bruce/Pavel,\nThanks for your inputs. After setting force_parallel_mode=off Execution\ntime of same query was reduced to 1ms from 200 ms. Worked like a charm. We\nalso increased work_mem to 80=MB. Thanks again.\n\nRegards,\nAditya.\n\nOn Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:\n\n> Thanks Justin. Will review all parameters and get back to you.\n>\n> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:\n>\n>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>> > Bruce Momjian <[email protected]> writes:\n>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>> >\n>> > > Yes. I bet someone set it without reading our docs:\n>> >\n>> > >\n>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>> >\n>> > > --> Allows the use of parallel queries for testing purposes even in\n>> cases\n>> > > --> where no performance benefit is expected.\n>> >\n>> > > We might need to clarify this sentence to be clearer it is _only_ for\n>> > > testing.\n>> >\n>> > I wonder why it is listed under planner options at all, and not under\n>> > developer options.\n>>\n>> Because it's there to help DBAs catch errors in functions incorrectly\n>> marked as\n>> parallel safe.\n>>\n>> --\n>> Justin\n>>\n>\n\nHi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:Thanks Justin. 
Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 23:06:57 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]> napsal:\n\n> Hi Justin/Bruce/Pavel,\n> Thanks for your inputs. After setting force_parallel_mode=off Execution\n> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n> also increased work_mem to 80=MB. Thanks\n>\n\nsuper.\n\nThe too big max_connection can cause a lot of problems. You should install\nand use pgbouncer or pgpool II.\n\nhttps://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n\nRegards\n\nPavel\n\n\n\n\n> again.\n>\n> Regards,\n> Aditya.\n>\n> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:\n>\n>> Thanks Justin. Will review all parameters and get back to you.\n>>\n>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]>\n>> wrote:\n>>\n>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>> > Bruce Momjian <[email protected]> writes:\n>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>> >\n>>> > > Yes. I bet someone set it without reading our docs:\n>>> >\n>>> > >\n>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>> >\n>>> > > --> Allows the use of parallel queries for testing purposes even in\n>>> cases\n>>> > > --> where no performance benefit is expected.\n>>> >\n>>> > > We might need to clarify this sentence to be clearer it is _only_ for\n>>> > > testing.\n>>> >\n>>> > I wonder why it is listed under planner options at all, and not under\n>>> > developer options.\n>>>\n>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>> marked as\n>>> parallel safe.\n>>>\n>>> --\n>>> Justin\n>>>\n>>\n\nso 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. 
Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 19:41:50 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Yes. I have made suggestions on connection pooling as well. Currently it is\nbeing done from Application side.\n\nOn Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]> napsal:\n>\n>> Hi Justin/Bruce/Pavel,\n>> Thanks for your inputs. After setting force_parallel_mode=off Execution\n>> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n>> also increased work_mem to 80=MB. Thanks\n>>\n>\n> super.\n>\n> The too big max_connection can cause a lot of problems. You should install\n> and use pgbouncer or pgpool II.\n>\n>\n> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>> again.\n>>\n>> Regards,\n>> Aditya.\n>>\n>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:\n>>\n>>> Thanks Justin. Will review all parameters and get back to you.\n>>>\n>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]>\n>>> wrote:\n>>>\n>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>> > Bruce Momjian <[email protected]> writes:\n>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>> >\n>>>> > > Yes. 
I bet someone set it without reading our docs:\n>>>> >\n>>>> > >\n>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>> >\n>>>> > > --> Allows the use of parallel queries for testing purposes even in\n>>>> cases\n>>>> > > --> where no performance benefit is expected.\n>>>> >\n>>>> > > We might need to clarify this sentence to be clearer it is _only_\n>>>> for\n>>>> > > testing.\n>>>> >\n>>>> > I wonder why it is listed under planner options at all, and not under\n>>>> > developer options.\n>>>>\n>>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>>> marked as\n>>>> parallel safe.\n>>>>\n>>>> --\n>>>> Justin\n>>>>\n>>>\n\nYes. I have made suggestions on connection pooling as well. Currently it is being done from Application side.On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <[email protected]> wrote:so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. 
I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 23:15:47 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "so 3. 4. 2021 v 19:45 odesílatel aditya desai <[email protected]> napsal:\n\n> Yes. I have made suggestions on connection pooling as well. Currently it\n> is being done from Application side.\n>\n\nIt is usual - but the application side pooling doesn't solve well\noverloading. The behaviour of the database is not linear. Usually opened\nconnections are not active. But any non active connection can be changed to\nan active connection (there is not any limit for active connections), and\nthen the performance can be very very slow. Good pooling and good setting\nof max_connections is protection against overloading. max_connection should\nbe 10-20 x CPU cores (for OLTP)\n\nRegards\n\nPavel\n\n\n\n\n> On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]>\n>> napsal:\n>>\n>>> Hi Justin/Bruce/Pavel,\n>>> Thanks for your inputs. After setting force_parallel_mode=off Execution\n>>> time of same query was reduced to 1ms from 200 ms. Worked like a charm. We\n>>> also increased work_mem to 80=MB. Thanks\n>>>\n>>\n>> super.\n>>\n>> The too big max_connection can cause a lot of problems. You should\n>> install and use pgbouncer or pgpool II.\n>>\n>>\n>> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>> again.\n>>>\n>>> Regards,\n>>> Aditya.\n>>>\n>>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:\n>>>\n>>>> Thanks Justin. Will review all parameters and get back to you.\n>>>>\n>>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>>> > Bruce Momjian <[email protected]> writes:\n>>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>>> >\n>>>>> > > Yes. 
I bet someone set it without reading our docs:\n>>>>> >\n>>>>> > >\n>>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>>> >\n>>>>> > > --> Allows the use of parallel queries for testing purposes even\n>>>>> in cases\n>>>>> > > --> where no performance benefit is expected.\n>>>>> >\n>>>>> > > We might need to clarify this sentence to be clearer it is _only_\n>>>>> for\n>>>>> > > testing.\n>>>>> >\n>>>>> > I wonder why it is listed under planner options at all, and not under\n>>>>> > developer options.\n>>>>>\n>>>>> Because it's there to help DBAs catch errors in functions incorrectly\n>>>>> marked as\n>>>>> parallel safe.\n>>>>>\n>>>>> --\n>>>>> Justin\n>>>>>\n>>>>\n\nso 3. 4. 2021 v 19:45 odesílatel aditya desai <[email protected]> napsal:Yes. I have made suggestions on connection pooling as well. Currently it is being done from Application side.It is usual - but the application side pooling doesn't solve well overloading. The behaviour of the database is not linear. Usually opened connections are not active. But any non active connection can be changed to an active connection (there is not any limit for active connections), and then the performance can be very very slow. Good pooling and good setting of max_connections is protection against overloading. max_connection should be 10-20 x CPU cores (for OLTP)RegardsPavel On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <[email protected]> wrote:so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]> napsal:Hi Justin/Bruce/Pavel,Thanks for your inputs. After setting force_parallel_mode=off Execution time of same query was reduced to 1ms from 200 ms. Worked like a charm. We also increased work_mem to 80=MB. Thanks super.The too big max_connection can cause a lot of problems. You should install and use pgbouncer or pgpool II. 
https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/RegardsPavel again.Regards,Aditya.On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]> wrote:Thanks Justin. Will review all parameters and get back to you.On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]> wrote:On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> >> Yes, force_parallel_mode is on. Should we set it off?\n> \n> > Yes. I bet someone set it without reading our docs:\n> \n> > https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n> \n> > --> Allows the use of parallel queries for testing purposes even in cases\n> > --> where no performance benefit is expected.\n> \n> > We might need to clarify this sentence to be clearer it is _only_ for\n> > testing.\n> \n> I wonder why it is listed under planner options at all, and not under\n> developer options.\n\nBecause it's there to help DBAs catch errors in functions incorrectly marked as\nparallel safe.\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 19:50:38 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "Forking this thread\nhttps://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n\nOn Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n> > > >> Yes, force_parallel_mode is on. Should we set it off?\n\nBruce Momjian <[email protected]> writes:\n> > > > Yes. I bet someone set it without reading our docs:\n...\n> > > > We might need to clarify this sentence to be clearer it is _only_ for\n> > > > testing.\n\nOn Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n> > > I wonder why it is listed under planner options at all, and not under\n> > > developer options.\n\nOn Sat, Apr 3, 2021 at 10:41:14AM -0500, Justin Pryzby wrote:\n> > Because it's there to help DBAs catch errors in functions incorrectly marked as\n> > parallel safe.\n\nOn Sat, Apr 03, 2021 at 11:43:36AM -0400, Bruce Momjian wrote:\n> Uh, isn't that developer/debugging?\n\nI understood \"developer\" to mean someone who's debugging postgres itself, not\n(say) a function written using pl/pgsql. Like backtrace_functions,\npost_auth_delay, jit_profiling_support.\n\nBut I see that some \"dev\" options are more user-facing (for a sufficiently\nadvanced user):\nignore_checksum_failure, ignore_invalid_pages, zero_damaged_pages.\n\nAlso, I understood this to mean the \"category\" in pg_settings, but I guess\nwhat's important here is the absense of the GUC in the sample/template config\nfile. pg_settings.category and the sample headings it appears are intended to\nbe synchronized, but a few of them are out of sync. See attached.\n\n+1 to move this to \"developer\" options and remove it from the sample config:\n\n# - Other Planner Options -\n#force_parallel_mode = off\n\n-- \nJustin",
"msg_date": "Sat, 3 Apr 2021 20:25:46 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "[PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Noted thanks!!\n\nOn Sun, Apr 4, 2021 at 4:19 PM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> ne 4. 4. 2021 v 12:39 odesílatel aditya desai <[email protected]> napsal:\n>\n>> Hi Pavel,\n>> Notes thanks. We have 64 core cpu and 320 GB RAM.\n>>\n>\n> ok - this is probably good for max thousand connections, maybe less (about\n> 6 hundred). Postgres doesn't perform well, when there are too many active\n> queries. Other databases have limits for active queries, and then use an\n> internal queue. But Postgres has nothing similar.\n>\n>\n>\n>\n>\n>\n>\n>> Regards,\n>> Aditya.\n>>\n>> On Sat, Apr 3, 2021 at 11:21 PM Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> so 3. 4. 2021 v 19:45 odesílatel aditya desai <[email protected]>\n>>> napsal:\n>>>\n>>>> Yes. I have made suggestions on connection pooling as well. Currently\n>>>> it is being done from Application side.\n>>>>\n>>>\n>>> It is usual - but the application side pooling doesn't solve well\n>>> overloading. The behaviour of the database is not linear. Usually opened\n>>> connections are not active. But any non active connection can be changed to\n>>> an active connection (there is not any limit for active connections), and\n>>> then the performance can be very very slow. Good pooling and good setting\n>>> of max_connections is protection against overloading. max_connection should\n>>> be 10-20 x CPU cores (for OLTP)\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>\n>>>\n>>>> On Sat, Apr 3, 2021 at 11:12 PM Pavel Stehule <[email protected]>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>>\n>>>>> so 3. 4. 2021 v 19:37 odesílatel aditya desai <[email protected]>\n>>>>> napsal:\n>>>>>\n>>>>>> Hi Justin/Bruce/Pavel,\n>>>>>> Thanks for your inputs. After setting force_parallel_mode=off\n>>>>>> Execution time of same query was reduced to 1ms from 200 ms. Worked like a\n>>>>>> charm. We also increased work_mem to 80=MB. 
Thanks\n>>>>>>\n>>>>>\n>>>>> super.\n>>>>>\n>>>>> The too big max_connection can cause a lot of problems. You should\n>>>>> install and use pgbouncer or pgpool II.\n>>>>>\n>>>>>\n>>>>> https://scalegrid.io/blog/postgresql-connection-pooling-part-4-pgbouncer-vs-pgpool/\n>>>>>\n>>>>> Regards\n>>>>>\n>>>>> Pavel\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>> again.\n>>>>>>\n>>>>>> Regards,\n>>>>>> Aditya.\n>>>>>>\n>>>>>> On Sat, Apr 3, 2021 at 9:14 PM aditya desai <[email protected]>\n>>>>>> wrote:\n>>>>>>\n>>>>>>> Thanks Justin. Will review all parameters and get back to you.\n>>>>>>>\n>>>>>>> On Sat, Apr 3, 2021 at 9:11 PM Justin Pryzby <[email protected]>\n>>>>>>> wrote:\n>>>>>>>\n>>>>>>>> On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:\n>>>>>>>> > Bruce Momjian <[email protected]> writes:\n>>>>>>>> > > On Sat, Apr 3, 2021 at 08:38:18PM +0530, aditya desai wrote:\n>>>>>>>> > >> Yes, force_parallel_mode is on. Should we set it off?\n>>>>>>>> >\n>>>>>>>> > > Yes. I bet someone set it without reading our docs:\n>>>>>>>> >\n>>>>>>>> > >\n>>>>>>>> https://www.postgresql.org/docs/13/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\n>>>>>>>> >\n>>>>>>>> > > --> Allows the use of parallel queries for testing purposes\n>>>>>>>> even in cases\n>>>>>>>> > > --> where no performance benefit is expected.\n>>>>>>>> >\n>>>>>>>> > > We might need to clarify this sentence to be clearer it is\n>>>>>>>> _only_ for\n>>>>>>>> > > testing.\n>>>>>>>> >\n>>>>>>>> > I wonder why it is listed under planner options at all, and not\n>>>>>>>> under\n>>>>>>>> > developer options.\n>>>>>>>>\n>>>>>>>> Because it's there to help DBAs catch errors in functions\n>>>>>>>> incorrectly marked as\n>>>>>>>> parallel safe.\n>>>>>>>>\n>>>>>>>> --\n>>>>>>>> Justin\n>>>>>>>>\n>>>>>>>\n\n",
"msg_date": "Sun, 4 Apr 2021 16:40:33 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT Query taking 200 ms on PostgreSQL compared to 4 ms on\n Oracle after migration."
},
{
"msg_contents": "The previous patches accidentally included some unrelated changes.\n\n-- \nJustin",
"msg_date": "Thu, 8 Apr 2021 16:38:13 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> Forking this thread\n> https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n\nDidn't see this one, thanks for forking.\n\n> I understood \"developer\" to mean someone who's debugging postgres itself, not\n> (say) a function written using pl/pgsql. Like backtrace_functions,\n> post_auth_delay, jit_profiling_support.\n> \n> But I see that some \"dev\" options are more user-facing (for a sufficiently\n> advanced user):\n> ignore_checksum_failure, ignore_invalid_pages, zero_damaged_pages.\n> \n> Also, I understood this to mean the \"category\" in pg_settings, but I guess\n> what's important here is the absense of the GUC in the sample/template config\n> file. pg_settings.category and the sample headings it appears are intended to\n> be synchronized, but a few of them are out of sync. See attached.\n> \n> +1 to move this to \"developer\" options and remove it from the sample config:\n> \n> # - Other Planner Options -\n> #force_parallel_mode = off\n\n0001 has some changes to pg_config_manual.h related to valgrind and\nmemory randomization. You may want to remove that before posting a\npatch.\n\n- {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION,\n+ {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION_SENDING,\nI can get behind this change for clarity where it gets actively used.\n\n- {\"track_activity_query_size\", PGC_POSTMASTER, RESOURCES_MEM,\n+ {\"track_activity_query_size\", PGC_POSTMASTER, STATS_COLLECTOR,\nBut not this one, because it is a memory setting.\n\n- {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n+ {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\nAnd not this one either, as it is mainly a planner thing, like the\nother parameters in the same area.\n\nThe last change is related to log_autovacuum_min_duration, and I can\nget behind the argument you are making to group all log activity\nparameters together. 
Now, about this part:\n+#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and\n+ # their durations, > 0 logs only\n+ # actions running at least this number\n+ # of milliseconds.\nI think that we should clarify in the description that this is an\nautovacuum-only thing, say by appending a small sentence about the\nfact that it logs autovacuum activities, in a similar fashion to\nlog_temp_files. Moving the parameter out of the autovacuum section\nmakes it lose a bit of context.\n\n@@ -6903,6 +6903,7 @@ fetch_more_data_begin(AsyncRequest *areq)\n char sql[64];\n\n Assert(!fsstate->conn_state->pendingAreq);\n+ Assert(fsstate->conn);\nWhat's this diff doing here?\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:50:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> > Forking this thread\n> > https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n> \n> Didn't see this one, thanks for forking.\n> \n> - {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n> + {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> And not this one either, as it is mainly a planner thing, like the\n> other parameters in the same area.\n\nThis is the main motive behind the patch.\n\nDeveloper options aren't shown in postgresql.conf.sample, which it seems like\nsometimes people read through quickly, setting a whole bunch of options that\nsound good, sometimes including this one. And in the best case they then ask\non -performance why their queries are slow and we tell them to turn it back off\nto fix their issues. This changes to no longer put it in .sample, and calling\nit a \"dev\" option seems to be the classification and mechanism by which to do\nthat.\n\n-- \nJustin\n\nps, Maybe you saw that I'd already resent without including the accidental junk\nhunks.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 22:17:18 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n> On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> > On Sat, Apr 03, 2021 at 08:25:46PM -0500, Justin Pryzby wrote:\n> > > Forking this thread\n> > > https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us\n> > \n> > Didn't see this one, thanks for forking.\n> > \n> > - {\"force_parallel_mode\", PGC_USERSET, QUERY_TUNING_OTHER,\n> > + {\"force_parallel_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > And not this one either, as it is mainly a planner thing, like the\n> > other parameters in the same area.\n> \n> This is the main motive behind the patch.\n> \n> Developer options aren't shown in postgresql.conf.sample, which it seems like\n> sometimes people read through quickly, setting a whole bunch of options that\n> sound good, sometimes including this one. And in the best case they then ask\n> on -performance why their queries are slow and we tell them to turn it back off\n> to fix their issues. This changes to no longer put it in .sample, and calling\n> it a \"dev\" option seems to be the classification and mechanism by which to do\n> that.\n\n+1\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 9 Apr 2021 07:39:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 07:39:28AM -0400, Bruce Momjian wrote:\n> On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n>> This is the main motive behind the patch.\n>> \n>> Developer options aren't shown in postgresql.conf.sample, which it seems like\n>> sometimes people read through quickly, setting a whole bunch of options that\n>> sound good, sometimes including this one. And in the best case they then ask\n>> on -performance why their queries are slow and we tell them to turn it back off\n>> to fix their issues. This changes to no longer put it in .sample, and calling\n>> it a \"dev\" option seems to be the classification and mechanism by which to do\n>> that.\n> \n> +1\n\nHm. I can see the point you are making based on the bug report that\nhas led to this thread:\nhttps://www.postgresql.org/message-id/CAN0SRDFV=Fv0zXHCGbh7gh=MTfw05Xd1x7gjJrZs5qn-TEphOw@mail.gmail.com\n\nHowever, I'd like to think that we can do better than what's proposed\nin the patch. There are a couple of things to consider here:\n- Should the parameter be renamed to reflect that it should only be\nused for testing purposes?\n- Should we make more general the description of the developer options\nin the docs?\n\nI have applied the patch for log_autovacuum_min_duration for now, as\nthis one is clearly wrong.\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 14:01:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:31 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Apr 09, 2021 at 07:39:28AM -0400, Bruce Momjian wrote:\n> > On Thu, Apr 8, 2021 at 10:17:18PM -0500, Justin Pryzby wrote:\n> >> This is the main motive behind the patch.\n> >>\n> >> Developer options aren't shown in postgresql.conf.sample, which it seems like\n> >> sometimes people read through quickly, setting a whole bunch of options that\n> >> sound good, sometimes including this one. And in the best case they then ask\n> >> on -performance why their queries are slow and we tell them to turn it back off\n> >> to fix their issues. This changes to no longer put it in .sample, and calling\n> >> it a \"dev\" option seems to be the classification and mechanism by which to do\n> >> that.\n> >\n> > +1\n>\n> Hm. I can see the point you are making based on the bug report that\n> has led to this thread:\n> https://www.postgresql.org/message-id/CAN0SRDFV=Fv0zXHCGbh7gh=MTfw05Xd1x7gjJrZs5qn-TEphOw@mail.gmail.com\n>\n> However, I'd like to think that we can do better than what's proposed\n> in the patch. There are a couple of things to consider here:\n> - Should the parameter be renamed to reflect that it should only be\n> used for testing purposes?\n> - Should we make more general the description of the developer options\n> in the docs?\n\nIMO, categorizing force_parallel_mode to DEVELOPER_OPTIONS and moving\nit to the \"Developer Options\" section in config.sgml looks\nappropriate. So, the v2-0004 patch proposed by Justin at [1] looks\ngood to me. If there are any other GUCs that are not meant to be used\nin production, IMO we could follow the same.\n\n[1] https://www.postgresql.org/message-id/20210408213812.GA18734%40telsasoft.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:58:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> However, I'd like to think that we can do better than what's proposed\n> in the patch. There are a couple of things to consider here:\n> - Should the parameter be renamed to reflect that it should only be\n> used for testing purposes?\n\n-1 to that part, because it would break a bunch of buildfarm animals'\nconfigurations. I doubt that any gain in clarity would be worth it.\n\n> - Should we make more general the description of the developer options\n> in the docs?\n\nPerhaps ... what did you have in mind?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Apr 2021 01:40:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> However, I'd like to think that we can do better than what's proposed\n>> in the patch. There are a couple of things to consider here:\n>> - Should the parameter be renamed to reflect that it should only be\n>> used for testing purposes?\n> \n> -1 to that part, because it would break a bunch of buildfarm animals'\n> configurations. I doubt that any gain in clarity would be worth it.\n\nOkay.\n\n>> - Should we make more general the description of the developer options\n>> in the docs?\n> \n> Perhaps ... what did you have in mind?\n\nThe first sentence of the page now says that:\n\"The following parameters are intended for work on the PostgreSQL\nsource code, and in some cases to assist with recovery of severely\ndamaged databases.\"\n\nThat does not stick with force_parallel_mode IMO. Maybe:\n\"The following parameters are intended for development work related to\nPostgreSQL. Some of them work on the PostgreSQL source code, some of\nthem can be used to control the run-time behavior of the server, and\nin some cases they can be used to assist with the recovery of severely\ndamaged databases.\"\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 16:34:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 04:34:23PM +0900, Michael Paquier wrote:\n> On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n> >> - Should we make more general the description of the developer options\n> >> in the docs?\n> > \n> > Perhaps ... what did you have in mind?\n> \n> The first sentence of the page now says that:\n> \"The following parameters are intended for work on the PostgreSQL\n> source code, and in some cases to assist with recovery of severely\n> damaged databases.\"\n> \n> That does not stick with force_parallel_mode IMO. Maybe:\n\nGood point.\n\n-- \nJustin",
"msg_date": "Tue, 13 Apr 2021 07:31:39 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Apr 12, 2021 at 01:40:52AM -0400, Tom Lane wrote:\n>> Perhaps ... what did you have in mind?\n\n> The first sentence of the page now says that:\n> \"The following parameters are intended for work on the PostgreSQL\n> source code, and in some cases to assist with recovery of severely\n> damaged databases.\"\n\n> That does not stick with force_parallel_mode IMO. Maybe:\n> \"The following parameters are intended for development work related to\n> PostgreSQL. Some of them work on the PostgreSQL source code, some of\n> them can be used to control the run-time behavior of the server, and\n> in some cases they can be used to assist with the recovery of severely\n> damaged databases.\"\n\nI think that's overly wordy. Maybe\n\nThe following parameters are intended for developer testing, and\nshould never be enabled for production work. However, some of\nthem can be used to assist with the recovery of severely\ndamaged databases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:12:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 10:12:35AM -0400, Tom Lane wrote:\n> The following parameters are intended for developer testing, and\n> should never be enabled for production work. However, some of\n> them can be used to assist with the recovery of severely\n> damaged databases.\n\nOkay, that's fine by me.\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 13:54:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 07:31:39AM -0500, Justin Pryzby wrote:\n> Good point.\n\nThanks. I have used the wording that Tom has proposed upthread, added\none GUC_NOT_IN_SAMPLE that you forgot, and applied the\nforce_parallel_mode patch.\n--\nMichael",
"msg_date": "Wed, 14 Apr 2021 15:57:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 03:57:21PM +0900, Michael Paquier wrote:\n> On Tue, Apr 13, 2021 at 07:31:39AM -0500, Justin Pryzby wrote:\n> > Good point.\n> \n> Thanks. I have used the wording that Tom has proposed upthread, added\n> one GUC_NOT_IN_SAMPLE that you forgot, and applied the\n> force_parallel_mode patch.\n\nThanks. It just occured to me to ask if we should backpatch it.\nThe goal is to avoid someone trying to use this as a peformance option.\n\nIt's to their benefit and ours if they don't do that on v10-13 for the next 5\nyears, not just v14-17.\n\nThe patch seems to apply cleanly on v12 but cherry-pick needs help for other\nbranches...\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 23 Apr 2021 13:23:26 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 01:23:26PM -0500, Justin Pryzby wrote:\n> The patch seems to apply cleanly on v12 but cherry-pick needs help for other\n> branches...\n\nFWIW, this did not seem bad enough to me to require a back-patch.\nThis parameter got introduced in 2016 and this was the only report\nrelated to it for the last 5 years.\n--\nMichael",
"msg_date": "Sat, 24 Apr 2021 10:50:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Sat, Apr 24, 2021 at 10:50:21AM +0900, Michael Paquier wrote:\n> On Fri, Apr 23, 2021 at 01:23:26PM -0500, Justin Pryzby wrote:\n> > The patch seems to apply cleanly on v12 but cherry-pick needs help for other\n> > branches...\n> \n> FWIW, this did not seem bad enough to me to require a back-patch.\n> This parameter got introduced in 2016 and this was the only report\n> related to it for the last 5 years.\n\nNo, it's not the first report - although I'm surprised I wasn't able to find\nmore than these.\n\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/CAKJS1f_Qi0iboCos3wu6QiAbdF-9FoK57wxzKbe2-WcesN4rFA%40mail.gmail.com\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 23 Apr 2021 21:57:35 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 10:50:53AM +0900, Michael Paquier wrote:\n> - {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION,\n> + {\"track_commit_timestamp\", PGC_POSTMASTER, REPLICATION_SENDING,\n> I can get behind this change for clarity where it gets actively used.\n\nI'm not sure what you meant?\n\n...but, I realized just now that *zero* other GUCs use \"REPLICATION\".\nAnd the documentation puts it in 20.6.1. Sending Servers,\nso it still seems to me that this is correct to move this, too.\n\nhttps://www.postgresql.org/docs/devel/runtime-config-replication.html\n\nThen, I wonder if REPLICATION should be removed from guc_tables.h...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Apr 2021 23:24:04 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> ...but, I realized just now that *zero* other GUCs use \"REPLICATION\".\n> And the documentation puts it in 20.6.1. Sending Servers,\n> so it still seems to me that this is correct to move this, too.\n> https://www.postgresql.org/docs/devel/runtime-config-replication.html\n> Then, I wonder if REPLICATION should be removed from guc_tables.h...\n\nFor the archives' sake --- these things are now committed as part of\na55a98477. I'd forgotten this thread, and then rediscovered the same\ninconsistencies as Justin had while reviewing Bharath Rupireddy's patch\nfor bug #16997 [1].\n\nI think this thread can now be closed off as done. However, there\nare some open issues mentioned in the other thread, if anyone here\nwants to comment.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16997-ff16127f6e0d1390%40postgresql.org\n\n\n",
"msg_date": "Sat, 08 May 2021 12:39:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] force_parallel_mode and GUC categories"
}
] |
[
{
"msg_contents": "Hi,\nWe have few select queries during which we see SHARED LOCKS and EXCLUSIVE\nLOCKS on tables. Can these locks cause slowness? Is there any way to reduce\nthe locks?\n\nWhat must be causing ACCESS EXCLUSIVE LOCKS when the application is running\nselect queries? Is it AUTOVACUUM?\n\nRegards,\nAditya.\n\n",
"msg_date": "Sun, 4 Apr 2021 16:12:14 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS"
},
{
"msg_contents": "Hi;\n\nIt's normal to see locks on tables during queries. These are usually locks\nused automatically by postgres as a result of the operations you perform on\nyour database. You should check the document for the lock modes postgres\nuses.\n\nLock causes slowness if it causes other queries to wait. You can see the\nqueries waiting for lock from pg_locks view. Access Exclusive Lock\ncompletely locks the table, does not allow read and write operations,\nblocks queries.\n\nCommands such as drop table, truncate, reindex, vacuum full, alter table\nuse this lock. And autovacuum uses a weaker lock on the table, not using\nan exclusive lock.\n\naditya desai <[email protected]>, 4 Nis 2021 Paz, 13:42 tarihinde şunu\nyazdı:\n\n> Hi,\n> We have few select queries during which we see SHARED LOCKS and EXCLUSIVE\n> LOCKS on tables. Can these locks cause slowness? Is there any way to reduce\n> the locks?\n>\n> What must be causing ACCESS EXCLUSIVE LOCKS when the application is\n> running select queries? Is it AUTOVACUUM?\n>\n> Regards,\n> Aditya.\n>\n\n",
"msg_date": "Sun, 4 Apr 2021 20:01:27 +0300",
"msg_from": "Amine Tengilimoglu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS"
},
{
"msg_contents": "On Sun, Apr 04, 2021 at 04:12:14PM +0530, aditya desai wrote:\n> Hi,\n> We have few select queries during which we see SHARED LOCKS and EXCLUSIVE\n> LOCKS on tables. Can these locks cause slowness? Is there any way to reduce\n> the locks?\n> \n> What must be causing ACCESS EXCLUSIVE LOCKS when the application is running\n> select queries? Is it AUTOVACUUM?\n\nI suggest to review all the logging settings, and consider setting:\nlog_destination = 'stderr,csvlog' \nlog_checkpoints = on \nlog_lock_waits = on \nlog_min_messages = info \nlog_min_error_statement = notice \nlog_temp_files = 0 \nlog_min_duration_statement = '9sec' \nlog_autovacuum_min_duration = '99sec' \n\nYou should probably set up some way to monitor logs.\nWe set log_destination=csvlog and import them into the DB.\nThen I have nagios checks for slow queries, errors, many tempfiles, etc.\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG\nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 4 Apr 2021 12:19:45 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS"
},
{
"msg_contents": "Thanks Amine and Justin. I will check and try this.\n\nRegards,\nAditya.\n\nOn Sun, Apr 4, 2021 at 10:49 PM Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Apr 04, 2021 at 04:12:14PM +0530, aditya desai wrote:\n> > Hi,\n> > We have few select queries during which we see SHARED LOCKS and EXCLUSIVE\n> > LOCKS on tables. Can these locks cause slowness? Is there any way to\n> reduce\n> > the locks?\n> >\n> > What must be causing ACCESS EXCLUSIVE LOCKS when the application is\n> running\n> > select queries? Is it AUTOVACUUM?\n>\n> I suggest to review all the logging settings, and consider setting:\n> log_destination = 'stderr,csvlog'\n>\n>\n> log_checkpoints = on\n>\n>\n>\n> log_lock_waits = on\n>\n>\n>\n> log_min_messages = info\n>\n>\n> log_min_error_statement = notice\n>\n>\n>\n> log_temp_files = 0\n>\n>\n> log_min_duration_statement = '9sec'\n>\n>\n>\n> log_autovacuum_min_duration = '99sec'\n>\n>\n>\n> You should probably set up some way to monitor logs.\n> We set log_destination=csvlog and import them into the DB.\n> Then I have nagios checks for slow queries, errors, many tempfiles, etc.\n>\n> https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG\n> https://www.postgresql.org/message-id/[email protected]\n>\n> --\n> Justin\n>\n\n",
"msg_date": "Tue, 6 Apr 2021 10:39:27 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS"
},
{
"msg_contents": "\nOn 4/4/21 6:42 AM, aditya desai wrote:\n> Hi,\n> We have few select queries during which we see SHARED LOCKS and\n> EXCLUSIVE LOCKS on tables. Can these locks cause slowness? Is there\n> any way to reduce the locks?\n>\n> What must be causing ACCESS EXCLUSIVE LOCKS when the application is\n> running select queries? Is it AUTOVACUUM?\n>\n\nSuggest you read this part of The Fine Manual:\n<https://www.postgresql.org/docs/current/explicit-locking.html>\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 6 Apr 2021 11:49:54 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS"
}
] |
[
{
"msg_contents": "Hi,\n\nI wonder can we set up a hot standby in such a way that we don't need \nany log streaming nor shipping, where instead every hot standby just \nmounts the same disk in read-only mode which the master uses to write \nhis WAL files?\n\nEven without a clustered file system, e.g., a UFS on FreeBSD, one can \nhave the master mount in read-write mode while all the hot standbys \nwould mount the volume read-only. Given that WAL logs are written out at \na certain rate, one can at regular intervals issue\n\nmount -u /pg_wal\n\nand it should refresh the metadata, I assume. I am re-reading about \nhot-standby, and it strikes me that this method is essentially the \"log \nshipping\" method only that there is no actual \"shipping\" involved, the \nnew log files simply appear all of a sudden on the disk.\n\nI suppose there is a question how we know when a new WAL file is \nfinished appearing? And as I read the log-shipping method may not be \nsuitable for hot standby use?\n\nIs this something that has been written about already?\n\nregards,\n-Gunther\n\n\n_______________________________________________\[email protected] mailing list\nhttps://lists.freebsd.org/mailman/listinfo/freebsd-performance\nTo unsubscribe, send any mail to \n\"[email protected]\"\n\n\n",
"msg_date": "Mon, 5 Apr 2021 18:22:26 -0400",
"msg_from": "Gunther Schadow <[email protected]>",
"msg_from_op": true,
"msg_subject": "PosgtgreSQL hot standby reading WAL from muli-attached volume?"
}
] |
[
{
"msg_contents": "Hi,\nWe have to access data from one schema to another. We have created a view\nfor this but performance is not good. We tried materialized views as well\nbut Refresh MV is creating problem as it puts and access exclusive locks.\n\nIs there any other way to achieve this?\n\n\nRegards,\nAditya.\n\n",
"msg_date": "Tue, 6 Apr 2021 13:22:31 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Substitute for synonym in Oracle after migration to postgres"
},
{
"msg_contents": "On Tue, 2021-04-06 at 13:22 +0530, aditya desai wrote:\n> We have to access data from one schema to another. We have created\n> a view for this but performance is not good.\n\nThe performance of a view that is just a simple SELECT to a table\nin a different schema will be just as good as using that table\ndirectly.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 06 Apr 2021 10:19:32 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Substitute for synonym in Oracle after migration to postgres"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 01:22:31PM +0530, aditya desai wrote:\n> Hi,\n> We have to access data from one schema to another. We have created a view for this but performance is not good. We tried\n> materialized views as well but Refresh MV is creating problem as it puts and access exclusive locks.\n> Is there any other way to achieve this?\n\nYes, just use the other table right in your query. There is no need to\nadd wrappers.\n\nselect * from schema1.table join schema2.table on ...\n\ndepesz\n\n\n",
"msg_date": "Tue, 6 Apr 2021 12:41:48 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Substitute for synonym in Oracle after migration to postgres"
},
{
"msg_contents": "Thanks will check.\n\nOn Tue, Apr 6, 2021 at 4:11 PM hubert depesz lubaczewski <[email protected]>\nwrote:\n\n> On Tue, Apr 06, 2021 at 01:22:31PM +0530, aditya desai wrote:\n> > Hi,\n> > We have to access data from one schema to another. We have created a\n> view for this but performance is not good. We tried\n> > materialized views as well but Refresh MV is creating problem as it puts\n> and access exclusive locks.\n> > Is there any other way to achieve this?\n>\n> Yes, just use the other table right in your query. There is no need to\n> add wrappers.\n>\n> select * from schema1.table join schema2.table on ...\n>\n> depesz\n>\n\n",
"msg_date": "Tue, 6 Apr 2021 18:49:16 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Substitute for synonym in Oracle after migration to postgres"
}
] |
[
{
"msg_contents": "Hi,\nBelow query takes 12 seconds. We have an index on postcode.\n\nselect count(*) from table where postcode >= '00420' AND postcode <= '00500'\n\nindex:\n\nCREATE INDEX Table_i1\n ON table USING btree\n ((postcode::numeric));\n\nTable has 180,000 rows and the count is 150,000. Expectation is to run\nthis query in 2-3 seconds(it takes 2 seconds in Oracle).\n\nHere is a query plan:\n\n\"Aggregate (cost=622347.34..622347.35 rows=1 width=8) (actual\ntime=12850.580..12850.580 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on table (cost=413379.89..621681.38 rows=266383\nwidth=0) (actual time=12645.656..12835.185 rows=209749 loops=1)\"\n\" Recheck Cond: (((postcode)::text >= '00420'::text) AND\n((postcode)::text <= '00500'::text))\"\n\" Heap Blocks: exact=118286\"\n\" -> Bitmap Index Scan on table_i4 (cost=0.00..413313.29\nrows=266383 width=0) (actual time=12615.321..12615.321 rows=209982 loops=1)\"\n\" Index Cond: (((postcode)::text >= '00420'::text) AND\n((postcode)::text <= '00500'::text))\"\n\"Planning Time: 0.191 ms\"\n\"Execution Time: 12852.823 ms\"\n\n\n\nRegards,\nAditya.\n\nHi,Below query takes 12 seconds. We have an index on postcode.select count(*) from table where postcode >= '00420' AND postcode <= '00500'index:CREATE INDEX Table_i1 ON table USING btree ((postcode::numeric));Table has 180,000 rows and the count is 150,000. 
Expectation is to run this query in 2-3 seconds(it takes 2 seconds in Oracle).Here is a query plan:\"Aggregate (cost=622347.34..622347.35 rows=1 width=8) (actual time=12850.580..12850.580 rows=1 loops=1)\"\" -> Bitmap Heap Scan on table (cost=413379.89..621681.38 rows=266383 width=0) (actual time=12645.656..12835.185 rows=209749 loops=1)\"\" Recheck Cond: (((postcode)::text >= '00420'::text) AND ((postcode)::text <= '00500'::text))\"\" Heap Blocks: exact=118286\"\" -> Bitmap Index Scan on table_i4 (cost=0.00..413313.29 rows=266383 width=0) (actual time=12615.321..12615.321 rows=209982 loops=1)\"\" Index Cond: (((postcode)::text >= '00420'::text) AND ((postcode)::text <= '00500'::text))\"\"Planning Time: 0.191 ms\"\"Execution Time: 12852.823 ms\"Regards,Aditya.",
"msg_date": "Tue, 6 Apr 2021 18:44:02 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "select count(*) is slow"
},
{
"msg_contents": "aditya desai <[email protected]> writes:\n> Below query takes 12 seconds. We have an index on postcode.\n\n> select count(*) from table where postcode >= '00420' AND postcode <= '00500'\n\nThat query does not match this index:\n\n> CREATE INDEX Table_i1\n> ON table USING btree\n> ((postcode::numeric));\n\nYou could either change postcode to numeric, change all your queries\nof this sort to include the cast explicitly, or make an index that\ndoesn't have a cast.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Apr 2021 09:25:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) is slow"
},
{
"msg_contents": "Thanks Tom. Will try with numeric. Please ignore table and index naming.\n\nOn Tue, Apr 6, 2021 at 6:55 PM Tom Lane <[email protected]> wrote:\n\n> aditya desai <[email protected]> writes:\n> > Below query takes 12 seconds. We have an index on postcode.\n>\n> > select count(*) from table where postcode >= '00420' AND postcode <=\n> '00500'\n>\n> That query does not match this index:\n>\n> > CREATE INDEX Table_i1\n> > ON table USING btree\n> > ((postcode::numeric));\n>\n> You could either change postcode to numeric, change all your queries\n> of this sort to include the cast explicitly, or make an index that\n> doesn't have a cast.\n>\n> regards, tom lane\n>\n\nThanks Tom. Will try with numeric. Please ignore table and index naming.On Tue, Apr 6, 2021 at 6:55 PM Tom Lane <[email protected]> wrote:aditya desai <[email protected]> writes:\n> Below query takes 12 seconds. We have an index on postcode.\n\n> select count(*) from table where postcode >= '00420' AND postcode <= '00500'\n\nThat query does not match this index:\n\n> CREATE INDEX Table_i1\n> ON table USING btree\n> ((postcode::numeric));\n\nYou could either change postcode to numeric, change all your queries\nof this sort to include the cast explicitly, or make an index that\ndoesn't have a cast.\n\n regards, tom lane",
"msg_date": "Tue, 6 Apr 2021 19:00:26 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select count(*) is slow"
},
{
"msg_contents": "\nOn 4/6/21 9:30 AM, aditya desai wrote:\n> Thanks Tom. Will try with numeric. Please ignore table and index naming.\n>\n> On Tue, Apr 6, 2021 at 6:55 PM Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> aditya desai <[email protected] <mailto:[email protected]>> writes:\n> > Below query takes 12 seconds. We have an index on postcode.\n>\n> > select count(*) from table where postcode >= '00420' AND\n> postcode <= '00500'\n>\n> That query does not match this index:\n>\n> > CREATE INDEX Table_i1\n> > ON table USING btree\n> > ((postcode::numeric));\n>\n> You could either change postcode to numeric, change all your queries\n> of this sort to include the cast explicitly, or make an index that\n> doesn't have a cast.\n>\n> \n>\n\n\nIMNSHO postcodes, zip codes, telephone numbers and the like should never\nbe numeric under any circumstances. This isn't numeric data (what is the\naverage postcode?), it's textual data consisting of digits, so they\nshould always be text/varchar. The index here should just be on the\nplain text column, not cast to numeric.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 6 Apr 2021 11:44:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) is slow"
},
{
"msg_contents": "Thanks to all of you. Removed casting to numeric from Index. Performance\nimproved from 12 sec to 500 ms. Rocket!!!\n\nOn Tue, Apr 6, 2021 at 9:14 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 4/6/21 9:30 AM, aditya desai wrote:\n> > Thanks Tom. Will try with numeric. Please ignore table and index naming.\n> >\n> > On Tue, Apr 6, 2021 at 6:55 PM Tom Lane <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > aditya desai <[email protected] <mailto:[email protected]>>\n> writes:\n> > > Below query takes 12 seconds. We have an index on postcode.\n> >\n> > > select count(*) from table where postcode >= '00420' AND\n> > postcode <= '00500'\n> >\n> > That query does not match this index:\n> >\n> > > CREATE INDEX Table_i1\n> > > ON table USING btree\n> > > ((postcode::numeric));\n> >\n> > You could either change postcode to numeric, change all your queries\n> > of this sort to include the cast explicitly, or make an index that\n> > doesn't have a cast.\n> >\n> >\n> >\n>\n>\n> IMNSHO postcodes, zip codes, telephone numbers and the like should never\n> be numeric under any circumstances. This isn't numeric data (what is the\n> average postcode?), it's textual data consisting of digits, so they\n> should always be text/varchar. The index here should just be on the\n> plain text column, not cast to numeric.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nThanks to all of you. Removed casting to numeric from Index. Performance improved from 12 sec to 500 ms. Rocket!!!On Tue, Apr 6, 2021 at 9:14 PM Andrew Dunstan <[email protected]> wrote:\nOn 4/6/21 9:30 AM, aditya desai wrote:\n> Thanks Tom. Will try with numeric. Please ignore table and index naming.\n>\n> On Tue, Apr 6, 2021 at 6:55 PM Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> aditya desai <[email protected] <mailto:[email protected]>> writes:\n> > Below query takes 12 seconds. 
We have an index on postcode.\n>\n> > select count(*) from table where postcode >= '00420' AND\n> postcode <= '00500'\n>\n> That query does not match this index:\n>\n> > CREATE INDEX Table_i1\n> > ON table USING btree\n> > ((postcode::numeric));\n>\n> You could either change postcode to numeric, change all your queries\n> of this sort to include the cast explicitly, or make an index that\n> doesn't have a cast.\n>\n> \n>\n\n\nIMNSHO postcodes, zip codes, telephone numbers and the like should never\nbe numeric under any circumstances. This isn't numeric data (what is the\naverage postcode?), it's textual data consisting of digits, so they\nshould always be text/varchar. The index here should just be on the\nplain text column, not cast to numeric.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 7 Apr 2021 13:39:47 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select count(*) is slow"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target\nSchema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source\nsystem \n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with\nthis log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the\ndatabase backup file is about 1GByte. But we do the same for all the Target\ntables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100\ntables. For the second and all subsequent run it is fast enough (15\nminutes). \n\nThe only difference between the first run and all the others is that in the\nfirst run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some\n\"space\", and the INSET operation in the 4. step can reuse this space. But if\nno records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and\nreally the first run became very fast again.\n\n \n\nIs there any \"normal\" way to speed up this procedure?\n\nIn the production environment there will be only \"first runs\", the same\nlog_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán\n\n \n\n \n\n\nHi, We have a Class db.t2.medium database on AWS.We use a procedure to transfer data records from the Source to the Target Schema.Transfers are identified by the log_id field in the target table. 
The procedure is:1 all records are deleted from the Target table with the actual log_id value2 a complicated SELECT (numerous tables are joined) is created on the Source system 3 a cursor is defined based on this SELECT4 we go trough the CURSOR and insert new records into the Target table with this log_id (Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.) Our procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). The only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id. It seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. But if no records are deleted in the first step, the procedure is extremely slow. To speed up the first run we found the following workaround:We inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again. Is there any “normal” way to speed up this procedure?In the production environment there will be only “first runs”, the same log_id will never be used again. thankZoltán",
"msg_date": "Thu, 8 Apr 2021 13:24:00 +0200",
"msg_from": "=?iso-8859-2?Q?Szalontai_Zolt=E1n?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "If you do a delete on the first step without any statistics, you request will do a full scan of the table, which will be slower.\n\nDid you check the different execution plans ?\n\n\n________________________________\nFrom: Szalontai Zoltán <[email protected]>\nSent: Thursday, April 8, 2021 01:24 PM\nTo: [email protected] <[email protected]>\nSubject: procedure using CURSOR to insert is extremely slow\n\n\nHi,\n\n\n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target Schema.\n\nTransfers are identified by the log_id field in the target table.\n\n\n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source system\n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with this log_id\n\n\n\n(Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.)\n\n\n\nOur procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes).\n\nThe only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id.\n\n\n\nIt seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. 
But if no records are deleted in the first step, the procedure is extremely slow.\n\n\n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again.\n\n\n\nIs there any “normal” way to speed up this procedure?\n\nIn the production environment there will be only “first runs”, the same log_id will never be used again.\n\n\n\n\n\nthank\n\nZoltán\n\n\n\n\n\n\n\n\n\n\n\nIf you do a delete on the first step without any statistics, you request will do a full scan of the table, which will be slower.\n\n\nDid you check the different execution plans ? \n\n\n\n\n\n\n\n\n\n\n\nFrom: Szalontai Zoltán <[email protected]>\nSent: Thursday, April 8, 2021 01:24 PM\nTo: [email protected] <[email protected]>\nSubject: procedure using CURSOR to insert is extremely slow\n \n\n\n\n\nHi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target Schema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source system\n\n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with this log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100 tables. 
For the second and all subsequent run it is fast enough (15 minutes).\n\n\nThe only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. But if no records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again.\n\n \n\nIs there any “normal” way to speed up this procedure?\n\nIn the production environment there will be only “first runs”, the same log_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán",
"msg_date": "Thu, 8 Apr 2021 11:40:19 +0000",
"msg_from": "=?Windows-1252?Q?Herv=E9_Schweitzer_=28HER=29?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "How to check execution plans?\n\nWe are in the Loop of the Cursor, and we do insert operations in it.\n\n \n\nFrom: Hervé Schweitzer (HER) <[email protected]> \nSent: Thursday, April 8, 2021 1:40 PM\nTo: Szalontai Zoltán <[email protected]>;\[email protected]\nSubject: Re: procedure using CURSOR to insert is extremely slow\n\n \n\nIf you do a delete on the first step without any statistics, you request\nwill do a full scan of the table, which will be slower.\n\n \n\nDid you check the different execution plans ? \n\n \n\n _____ \n\nFrom: Szalontai Zoltán <[email protected]\n<mailto:[email protected]> >\nSent: Thursday, April 8, 2021 01:24 PM\nTo: [email protected]\n<mailto:[email protected]>\n<[email protected]\n<mailto:[email protected]> >\nSubject: procedure using CURSOR to insert is extremely slow \n\n \n\nHi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target\nSchema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source\nsystem \n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with\nthis log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the\ndatabase backup file is about 1GByte. But we do the same for all the Target\ntables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100\ntables. For the second and all subsequent run it is fast enough (15\nminutes). \n\nThe only difference between the first run and all the others is that in the\nfirst run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some\n\"space\", and the INSET operation in the 4. 
step can reuse this space. But if\nno records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and\nreally the first run became very fast again.\n\n \n\nIs there any \"normal\" way to speed up this procedure?\n\nIn the production environment there will be only \"first runs\", the same\nlog_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán\n\n \n\n \n\n\nHow to check execution plans?We are in the Loop of the Cursor, and we do insert operations in it. From: Hervé Schweitzer (HER) <[email protected]> Sent: Thursday, April 8, 2021 1:40 PMTo: Szalontai Zoltán <[email protected]>; [email protected]: Re: procedure using CURSOR to insert is extremely slow If you do a delete on the first step without any statistics, you request will do a full scan of the table, which will be slower. Did you check the different execution plans ? From: Szalontai Zoltán <[email protected]>Sent: Thursday, April 8, 2021 01:24 PMTo: [email protected] <[email protected]>Subject: procedure using CURSOR to insert is extremely slow Hi, We have a Class db.t2.medium database on AWS.We use a procedure to transfer data records from the Source to the Target Schema.Transfers are identified by the log_id field in the target table. The procedure is:1 all records are deleted from the Target table with the actual log_id value2 a complicated SELECT (numerous tables are joined) is created on the Source system 3 a cursor is defined based on this SELECT4 we go trough the CURSOR and insert new records into the Target table with this log_id (Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.) Our procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). 
The only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id. It seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. But if no records are deleted in the first step, the procedure is extremely slow. To speed up the first run we found the following workaround:We inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again. Is there any “normal” way to speed up this procedure?In the production environment there will be only “first runs”, the same log_id will never be used again. thankZoltán",
"msg_date": "Thu, 8 Apr 2021 13:58:23 +0200",
"msg_from": "=?iso-8859-2?Q?Szalontai_Zolt=E1n?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "Hi Zoltan,\n\nis there any particular reason why you don't do a bulk insert as:\n insert into target_table\n select ... from source_table(s) (with joins etc)\n\nRegards,\nMilos\n\n\n\nOn Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> We have a Class db.t2.medium database on AWS.\n>\n> We use a procedure to transfer data records from the Source to the Target\n> Schema.\n>\n> Transfers are identified by the log_id field in the target table.\n>\n>\n>\n> The procedure is:\n>\n> 1 all records are deleted from the Target table with the actual log_id\n> value\n>\n> 2 a complicated SELECT (numerous tables are joined) is created on the\n> Source system\n>\n> 3 a cursor is defined based on this SELECT\n>\n> 4 we go trough the CURSOR and insert new records into the Target table\n> with this log_id\n>\n>\n>\n> (Actually we have about 100 tables in the Target schema and the size of\n> the database backup file is about 1GByte. But we do the same for all the\n> Target tables.)\n>\n>\n>\n> Our procedure is extremely slow for the first run: 3 days for the 100\n> tables. For the second and all subsequent run it is fast enough (15\n> minutes).\n>\n> The only difference between the first run and all the others is that in\n> the first run there are no records in the Target schema with this log_id.\n>\n>\n>\n> It seems, that in the first step the DELETE operation makes free some\n> “space”, and the INSET operation in the 4. step can reuse this space. 
But\n> if no records are deleted in the first step, the procedure is extremely\n> slow.\n>\n>\n>\n> To speed up the first run we found the following workaround:\n>\n> We inserted dummy records into the Target tables with the proper log_id,\n> and really the first run became very fast again.\n>\n>\n>\n> Is there any “normal” way to speed up this procedure?\n>\n> In the production environment there will be only “first runs”, the same\n> log_id will never be used again.\n>\n>\n>\n>\n>\n> thank\n>\n> Zoltán\n>\n>\n>\n>\n>\n\n\n-- \nMilos Babic\nhttp://www.linkedin.com/in/milosbabic\n\nHi Zoltan,is there any particular reason why you don't do a bulk insert as: insert into target_table select ... from source_table(s) (with joins etc)Regards,MilosOn Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <[email protected]> wrote:Hi, We have a Class db.t2.medium database on AWS.We use a procedure to transfer data records from the Source to the Target Schema.Transfers are identified by the log_id field in the target table. The procedure is:1 all records are deleted from the Target table with the actual log_id value2 a complicated SELECT (numerous tables are joined) is created on the Source system 3 a cursor is defined based on this SELECT4 we go trough the CURSOR and insert new records into the Target table with this log_id (Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.) Our procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). The only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id. It seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. 
But if no records are deleted in the first step, the procedure is extremely slow. To speed up the first run we found the following workaround:We inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again. Is there any “normal” way to speed up this procedure?In the production environment there will be only “first runs”, the same log_id will never be used again. thankZoltán -- Milos Babichttp://www.linkedin.com/in/milosbabic",
"msg_date": "Thu, 8 Apr 2021 14:31:28 +0200",
"msg_from": "Milos Babic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "Hi Milos,\n\n \n\nInside the loops there are frequently if / else branches value transformations used.\n\nWe could not solve it without using a cursor.\n\n \n\nRegards,\n\nZoltán\n\n \n\nFrom: Milos Babic <[email protected]> \nSent: Thursday, April 8, 2021 2:31 PM\nTo: Szalontai Zoltán <[email protected]>\nCc: Pgsql Performance <[email protected]>\nSubject: Re: procedure using CURSOR to insert is extremely slow\n\n \n\nHi Zoltan,\n\n \n\nis there any particular reason why you don't do a bulk insert as:\n\n insert into target_table\n\n select ... from source_table(s) (with joins etc)\n\n \n\nRegards,\n\nMilos\n\n \n\n \n\n \n\nOn Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <[email protected] <mailto:[email protected]> > wrote:\n\nHi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target Schema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source system \n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with this log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). \n\nThe only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. 
But if no records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again.\n\n \n\nIs there any “normal” way to speed up this procedure?\n\nIn the production environment there will be only “first runs”, the same log_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán\n\n \n\n \n\n\n\n\n \n\n-- \n\nMilos Babic\n\nhttp://www.linkedin.com/in/milosbabic\n\n\nHi Milos, Inside the loops there are frequently if / else branches value transformations used.We could not solve it without using a cursor. Regards,Zoltán From: Milos Babic <[email protected]> Sent: Thursday, April 8, 2021 2:31 PMTo: Szalontai Zoltán <[email protected]>Cc: Pgsql Performance <[email protected]>Subject: Re: procedure using CURSOR to insert is extremely slow Hi Zoltan, is there any particular reason why you don't do a bulk insert as: insert into target_table select ... from source_table(s) (with joins etc) Regards,Milos On Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <[email protected]> wrote:Hi, We have a Class db.t2.medium database on AWS.We use a procedure to transfer data records from the Source to the Target Schema.Transfers are identified by the log_id field in the target table. The procedure is:1 all records are deleted from the Target table with the actual log_id value2 a complicated SELECT (numerous tables are joined) is created on the Source system 3 a cursor is defined based on this SELECT4 we go trough the CURSOR and insert new records into the Target table with this log_id (Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.) Our procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). 
The only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id. It seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. But if no records are deleted in the first step, the procedure is extremely slow. To speed up the first run we found the following workaround:We inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again. Is there any “normal” way to speed up this procedure?In the production environment there will be only “first runs”, the same log_id will never be used again. thankZoltán -- Milos Babichttp://www.linkedin.com/in/milosbabic",
"msg_date": "Thu, 8 Apr 2021 15:56:35 +0200",
"msg_from": "=?utf-8?Q?Szalontai_Zolt=C3=A1n?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "Hi Zoltan,\n\n \n\nI haven’t needed to use a cursor in 20 years of sometimes very complex sql coding. \n\n \n\nWhy? Cursors result in RBAR (row by agonizing row) operation which eliminates the power of set-based sql operations. Performance will always suffer – sometimes to extremes. I’m all about fastest possible performance for a given sql solution.\n\n \n\nHow? There have been times I’ve initially said a similar thing – “I don’t see how to solve this without a cursor”. When I hit that point, I stop and decompose the problem into simpler bits, and soak on it and always – literally always – a solution will appear. \n\n \n\nIt’s all in how we envision the solution, especially with Postgres and its amazing ecosystem of sql functions. We really can do almost anything. Since the code is obviously way to complex to post here, I’d simply encourage you to rethink how you’re approaching the solution.\n\n \n\nMike\n\n \n\nFrom: Szalontai Zoltán <[email protected]> \nSent: Thursday, April 08, 2021 6:57 AM\nTo: 'Milos Babic' <[email protected]>\nCc: 'Pgsql Performance' <[email protected]>\nSubject: RE: procedure using CURSOR to insert is extremely slow\n\n \n\nHi Milos,\n\n \n\nInside the loops there are frequently if / else branches value transformations used.\n\nWe could not solve it without using a cursor.\n\n \n\nRegards,\n\nZoltán\n\n \n\nFrom: Milos Babic <[email protected] <mailto:[email protected]> > \nSent: Thursday, April 8, 2021 2:31 PM\nTo: Szalontai Zoltán <[email protected] <mailto:[email protected]> >\nCc: Pgsql Performance <[email protected] <mailto:[email protected]> >\nSubject: Re: procedure using CURSOR to insert is extremely slow\n\n \n\nHi Zoltan,\n\n \n\nis there any particular reason why you don't do a bulk insert as:\n\n insert into target_table\n\n select ... 
from source_table(s) (with joins etc)\n\n \n\nRegards,\n\nMilos\n\n \n\n \n\n \n\nOn Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <[email protected] <mailto:[email protected]> > wrote:\n\nHi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target Schema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source system \n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with this log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). \n\nThe only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. 
But if no records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again.\n\n \n\nIs there any “normal” way to speed up this procedure?\n\nIn the production environment there will be only “first runs”, the same log_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán\n\n \n\n \n\n\n\n\n \n\n-- \n\nMilos Babic\n\nhttp://www.linkedin.com/in/milosbabic",
"msg_date": "Thu, 8 Apr 2021 09:52:44 -0700",
"msg_from": "\"Mike Sofen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "Hi Zoltan,\n\nyou should try to rethink the logic behind the query.\nNumerous if/then/else can be transformed into case-when, or a bunch of\nunions, which, I'm 100% certain will do much better than row-by-row\ninsertion.\n\nHowever, this is a general note.\nStill doesn't explain why it takes faster to insert with deletions (?!!)\nIs there any chance the set you inserting in the second run is smaller\n(e.g. only a fraction of the original one)?\n\nIf possible, you can send over a fragment of the code, and we can look into\nit.\n\nregards,\nMilos\n\n\n\n\n\n\nOn Thu, Apr 8, 2021 at 3:56 PM Szalontai Zoltán <\[email protected]> wrote:\n\n> Hi Milos,\n>\n>\n>\n> Inside the loops there are frequently if / else branches value\n> transformations used.\n>\n> We could not solve it without using a cursor.\n>\n>\n>\n> Regards,\n>\n> Zoltán\n>\n>\n>\n> *From:* Milos Babic <[email protected]>\n> *Sent:* Thursday, April 8, 2021 2:31 PM\n> *To:* Szalontai Zoltán <[email protected]>\n> *Cc:* Pgsql Performance <[email protected]>\n> *Subject:* Re: procedure using CURSOR to insert is extremely slow\n>\n>\n>\n> Hi Zoltan,\n>\n>\n>\n> is there any particular reason why you don't do a bulk insert as:\n>\n> insert into target_table\n>\n> select ... 
from source_table(s) (with joins etc)\n>\n>\n>\n> Regards,\n>\n> Milos\n>\n>\n>\n>\n>\n>\n>\n> On Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <\n> [email protected]> wrote:\n>\n> Hi,\n>\n>\n>\n> We have a Class db.t2.medium database on AWS.\n>\n> We use a procedure to transfer data records from the Source to the Target\n> Schema.\n>\n> Transfers are identified by the log_id field in the target table.\n>\n>\n>\n> The procedure is:\n>\n> 1 all records are deleted from the Target table with the actual log_id\n> value\n>\n> 2 a complicated SELECT (numerous tables are joined) is created on the\n> Source system\n>\n> 3 a cursor is defined based on this SELECT\n>\n> 4 we go trough the CURSOR and insert new records into the Target table\n> with this log_id\n>\n>\n>\n> (Actually we have about 100 tables in the Target schema and the size of\n> the database backup file is about 1GByte. But we do the same for all the\n> Target tables.)\n>\n>\n>\n> Our procedure is extremely slow for the first run: 3 days for the 100\n> tables. For the second and all subsequent run it is fast enough (15\n> minutes).\n>\n> The only difference between the first run and all the others is that in\n> the first run there are no records in the Target schema with this log_id.\n>\n>\n>\n> It seems, that in the first step the DELETE operation makes free some\n> “space”, and the INSET operation in the 4. step can reuse this space. 
But\n> if no records are deleted in the first step, the procedure is extremely\n> slow.\n>\n>\n>\n> To speed up the first run we found the following workaround:\n>\n> We inserted dummy records into the Target tables with the proper log_id,\n> and really the first run became very fast again.\n>\n>\n>\n> Is there any “normal” way to speed up this procedure?\n>\n> In the production environment there will be only “first runs”, the same\n> log_id will never be used again.\n>\n>\n>\n>\n>\n> thank\n>\n> Zoltán\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n>\n> Milos Babic\n>\n> http://www.linkedin.com/in/milosbabic\n>\n\n\n-- \nMilos Babic\nhttp://www.linkedin.com/in/milosbabic",
"msg_date": "Thu, 8 Apr 2021 20:21:59 +0200",
"msg_from": "Milos Babic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: procedure using CURSOR to insert is extremely slow"
},
{
"msg_contents": "Hi Milos,\n\n \n\nI discuss this kind of rethinking with the team.\n\n \n\nPerhaps I can copy our database on AWS for you, and you can check it.\n\n \n\nthanks,\n\nZoltán\n\n \n\nFrom: Milos Babic <[email protected]> \nSent: Thursday, April 8, 2021 8:22 PM\nTo: Szalontai Zoltán <[email protected]>\nCc: Pgsql Performance <[email protected]>\nSubject: Re: procedure using CURSOR to insert is extremely slow\n\n \n\nHi Zoltan,\n\n \n\nyou should try to rethink the logic behind the query.\n\nNumerous if/then/else can be transformed into case-when, or a bunch of unions, which, I'm 100% certain will do much better than row-by-row insertion.\n\n \n\nHowever, this is a general note.\n\nStill doesn't explain why it takes faster to insert with deletions (?!!)\n\nIs there any chance the set you inserting in the second run is smaller (e.g. only a fraction of the original one)?\n\n \n\nIf possible, you can send over a fragment of the code, and we can look into it.\n\n \n\nregards,\n\nMilos\n\n \n\n \n\n \n\n \n\n \n\n \n\nOn Thu, Apr 8, 2021 at 3:56 PM Szalontai Zoltán <[email protected] <mailto:[email protected]> > wrote:\n\nHi Milos,\n\n \n\nInside the loops there are frequently if / else branches value transformations used.\n\nWe could not solve it without using a cursor.\n\n \n\nRegards,\n\nZoltán\n\n \n\nFrom: Milos Babic <[email protected] <mailto:[email protected]> > \nSent: Thursday, April 8, 2021 2:31 PM\nTo: Szalontai Zoltán <[email protected] <mailto:[email protected]> >\nCc: Pgsql Performance <[email protected] <mailto:[email protected]> >\nSubject: Re: procedure using CURSOR to insert is extremely slow\n\n \n\nHi Zoltan,\n\n \n\nis there any particular reason why you don't do a bulk insert as:\n\n insert into target_table\n\n select ... 
from source_table(s) (with joins etc)\n\n \n\nRegards,\n\nMilos\n\n \n\n \n\n \n\nOn Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <[email protected] <mailto:[email protected]> > wrote:\n\nHi,\n\n \n\nWe have a Class db.t2.medium database on AWS.\n\nWe use a procedure to transfer data records from the Source to the Target Schema.\n\nTransfers are identified by the log_id field in the target table.\n\n \n\nThe procedure is:\n\n1 all records are deleted from the Target table with the actual log_id value\n\n2 a complicated SELECT (numerous tables are joined) is created on the Source system \n\n3 a cursor is defined based on this SELECT\n\n4 we go trough the CURSOR and insert new records into the Target table with this log_id\n\n \n\n(Actually we have about 100 tables in the Target schema and the size of the database backup file is about 1GByte. But we do the same for all the Target tables.)\n\n \n\nOur procedure is extremely slow for the first run: 3 days for the 100 tables. For the second and all subsequent run it is fast enough (15 minutes). \n\nThe only difference between the first run and all the others is that in the first run there are no records in the Target schema with this log_id.\n\n \n\nIt seems, that in the first step the DELETE operation makes free some “space”, and the INSET operation in the 4. step can reuse this space. 
But if no records are deleted in the first step, the procedure is extremely slow.\n\n \n\nTo speed up the first run we found the following workaround:\n\nWe inserted dummy records into the Target tables with the proper log_id, and really the first run became very fast again.\n\n \n\nIs there any “normal” way to speed up this procedure?\n\nIn the production environment there will be only “first runs”, the same log_id will never be used again.\n\n \n\n \n\nthank\n\nZoltán\n\n \n\n \n\n\n\n\n \n\n-- \n\nMilos Babic\n\nhttp://www.linkedin.com/in/milosbabic\n\n\n\n\n \n\n-- \n\nMilos Babic\n\nhttp://www.linkedin.com/in/milosbabic",
"msg_date": "Thu, 8 Apr 2021 21:45:52 +0200",
"msg_from": "=?utf-8?Q?Szalontai_Zolt=C3=A1n?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: procedure using CURSOR to insert is extremely slow"
}
] |
[
{
"msg_contents": "Hi,\nI need to combine results of multiple rows in one row. I get below error.\nCould you please help.\n\nQuery:\n\nselect string_agg((select '******' || P.PhaseName || ' - ' ||\nR.Recommendation AS \"ABC\" from tblAssessmentRecommendation\nR,tblAssessmentPhases P\nwhere R.PhaseID = P.PhaseID Order BY P.sortOrder DESC),' ')\n\nError:\n\nERROR: more than one row returned by a subquery used as an expression SQL\nstate: 21000\n\nRegards,\nAditya.",
"msg_date": "Thu, 8 Apr 2021 17:01:38 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "str_aggr function not wokring"
},
{
"msg_contents": "From: aditya desai <[email protected]>\r\nSent: Thursday, April 8, 2021 1:32 PM\r\nTo: Pgsql Performance <[email protected]>\r\nSubject: str_aggr function not wokring\r\n\r\nHi,\r\nI need to combine results of multiple rows in one row. I get below error. Could you please help.\r\n\r\nQuery:\r\n\r\nselect string_agg((select '******' || P.PhaseName || ' - ' || R.Recommendation AS \"ABC\" from tblAssessmentRecommendation R,tblAssessmentPhases P\r\nwhere R.PhaseID = P.PhaseID Order BY P.sortOrder DESC),' ')\r\n\r\nError:\r\n\r\nERROR: more than one row returned by a subquery used as an expression SQL state: 21000\r\n\r\nRegards,\r\nAditya.\r\n\r\n\r\nHi,\r\n\r\nI would suggest you to try something like this instead\r\n\r\nselect string_agg( '******' || P.PhaseName || ' - ' || R.Recommendation '' ORDER BY P.sortOrder DESC ) AS \"ABC\"\r\nfrom tblAssessmentRecommendation R,tblAssessmentPhases P\r\nwhere R.PhaseID = P.PhaseID\r\n\r\nRegards,\r\n\r\nPatrick",
"msg_date": "Thu, 8 Apr 2021 11:41:37 +0000",
"msg_from": "Patrick FICHE <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: str_aggr function not wokring"
},
{
"msg_contents": "Thanks Patrick. I used WITH Query and feeded that output to string_aggr\nwhich worked. However it is giving performance issues. Will check on that.\nTHanks.\n\nOn Thu, Apr 8, 2021 at 5:11 PM Patrick FICHE <[email protected]>\nwrote:\n\n> *From:* aditya desai <[email protected]>\n> *Sent:* Thursday, April 8, 2021 1:32 PM\n> *To:* Pgsql Performance <[email protected]>\n> *Subject:* str_aggr function not wokring\n>\n>\n>\n> Hi,\n>\n> I need to combine results of multiple rows in one row. I get below error.\n> Could you please help.\n>\n>\n>\n> Query:\n>\n>\n>\n> select string_agg((select '******' || P.PhaseName || ' - ' ||\n> R.Recommendation AS \"ABC\" from tblAssessmentRecommendation\n> R,tblAssessmentPhases P\n>\n> where R.PhaseID = P.PhaseID Order BY P.sortOrder DESC),' ')\n>\n>\n>\n> Error:\n>\n>\n>\n> ERROR: more than one row returned by a subquery used as an expression SQL\n> state: 21000\n>\n>\n>\n> Regards,\n>\n> Aditya.\n>\n>\n>\n>\n>\n> Hi,\n>\n>\n>\n> I would suggest you to try something like this instead\n>\n>\n>\n> select string_agg( '******' || P.PhaseName || ' - ' || R.Recommendation ''\n> ORDER BY P.sortOrder DESC ) AS \"ABC\"\n>\n> from tblAssessmentRecommendation R,tblAssessmentPhases P\n>\n> where R.PhaseID = P.PhaseID\n>\n>\n>\n> Regards,\n>\n>\n>\n> Patrick\n>\n>\n>\n",
"msg_date": "Thu, 8 Apr 2021 18:25:12 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: str_aggr function not wokring"
},
{
"msg_contents": "Sure!! Thanks for the response. Apologies for multiple questions. Faced\nthis during high priority MSSQL to PostgreSQL migration. Did not see any\nequivalent of XML PATH which would give desired results. Finally was able\nto resolve the issue by rewriting the Proc using WITH and string_aggr in\ncombination. However still facing performance issues in the same. Will\ninvestigate it.\n\nOn Thu, Apr 8, 2021 at 5:08 PM Mike Sofen <[email protected]> wrote:\n\n> You realize that there are a million answers to your questions online?\n> Are you doing any google searches before bothering this list with basic\n> questions? I personally never email this list until I’ve exhausted all\n> searches and extensive trial and error, as do most practitioners. This\n> list is incredibly patient and polite, and...there are limits. Please\n> consider doing more research before asking a question. In your example\n> below, you’re getting a basic subquery error – research how to fix that.\n> Mike\n>\n>\n>\n> *From:* aditya desai <[email protected]>\n> *Sent:* Thursday, April 08, 2021 4:32 AM\n> *To:* Pgsql Performance <[email protected]>\n> *Subject:* str_aggr function not wokring\n>\n>\n>\n> Hi,\n>\n> I need to combine results of multiple rows in one row. I get below error.\n> Could you please help.\n>\n>\n>\n> Query:\n>\n>\n>\n> select string_agg((select '******' || P.PhaseName || ' - ' ||\n> R.Recommendation AS \"ABC\" from tblAssessmentRecommendation\n> R,tblAssessmentPhases P\n>\n> where R.PhaseID = P.PhaseID Order BY P.sortOrder DESC),' ')\n>\n>\n>\n> Error:\n>\n>\n>\n> ERROR: more than one row returned by a subquery used as an expression SQL\n> state: 21000\n>\n>\n>\n> Regards,\n>\n> Aditya.\n>\n",
"msg_date": "Thu, 8 Apr 2021 18:28:56 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: str_aggr function not wokring"
}
] |
[
{
"msg_contents": "Hi,\nWe are trying to load data around 1Bil records into one table with INSERT statements (not able to use COPY command) and they are been waiting for a lock and the wait_event is \"transactionid\", I didn't find any information in the documents. Queries have been waiting for hours.\nTable DDL'sCREATE TABLE test_load( billg_acct_cid_hash character varying(50) COLLATE pg_catalog.\"default\" NOT NULL, accs_mthd_cid_hash character varying(50) COLLATE pg_catalog.\"default\" NOT NULL, soc character varying(10) COLLATE pg_catalog.\"default\" NOT NULL, soc_desc character varying(100) COLLATE pg_catalog.\"default\", service_type_cd character varying(10) COLLATE pg_catalog.\"default\", soc_start_dt date, soc_end_dt date, product_eff_dt date, product_exp_dt date, curr_ind character varying(1) COLLATE pg_catalog.\"default\", load_dttm timestamp without time zone NOT NULL, updt_dttm timestamp without time zone, md5_chk_sum character varying(100) COLLATE pg_catalog.\"default\", deld_from_src_ind character(1) COLLATE pg_catalog.\"default\", orphan_ind character(1) COLLATE pg_catalog.\"default\", CONSTRAINT test_load_pk PRIMARY KEY (billg_acct_cid_hash, accs_mthd_cid_hash, soc));\nquery results from pg_locks ;\n SELECT COALESCE(blockingl.relation::regclass::text, blockingl.locktype) AS locked_item, now() - blockeda.query_start AS waiting_duration, blockeda.pid AS blocked_pid, left(blockeda.query,7) AS blocked_query, blockedl.mode AS blocked_mode, blockinga.pid AS blocking_pid, left(blockinga.query,7) AS blocking_query, blockingl.mode AS blocking_mode FROM pg_locks blockedl JOIN pg_stat_activity blockeda ON blockedl.pid = blockeda.pid JOIN pg_locks blockingl ON (blockingl.transactionid = blockedl.transactionid OR blockingl.relation = blockedl.relation AND blockingl.locktype = blockedl.locktype) AND blockedl.pid <> blockingl.pid JOIN pg_stat_activity blockinga ON blockingl.pid = blockinga.pid AND blockinga.datid = blockeda.datid WHERE NOT blockedl.granted order by 
blockeda.query_start\n\"transactionid\" \"18:20:06.068154\" 681216 \"INSERT \" \"ShareLock\" 679840 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"18:19:05.504781\" 679688 \"INSERT \" \"ShareLock\" 679856 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"18:18:17.30099\" 679572 \"INSERT \" \"ShareLock\" 679612 \"INSERT \" \"ShareLock\"\"transactionid\" \"18:18:17.30099\" 679572 \"INSERT \" \"ShareLock\" 679580 \"INSERT \" \"ShareLock\"\"transactionid\" \"18:18:17.30099\" 679572 \"INSERT \" \"ShareLock\" 681108 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"18:14:17.969603\" 681080 \"INSERT \" \"ShareLock\" 681204 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"18:13:41.531575\" 681112 \"INSERT \" \"ShareLock\" 679636 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"18:04:16.195069\" 679556 \"INSERT \" \"ShareLock\" 679776 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"17:58:54.284211\" 679696 \"INSERT \" \"ShareLock\" 678940 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"17:57:54.220879\" 681144 \"INSERT \" \"ShareLock\" 679792 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"17:57:28.736147\" 679932 \"INSERT \" \"ShareLock\" 679696 \"INSERT \" \"ExclusiveLock\"\"transactionid\" \"17:53:48.701858\" 679580 \"INSERT \" \"ShareLock\" 679572 \"INSERT \" \"ShareLock\"\n\nquery results from pg_stat_activity ;\n\nSELECT pg_stat_activity.pid, pg_stat_activity.usename, pg_stat_activity.state, now() - pg_stat_activity.query_start AS runing_time, LEFT(pg_stat_activity.query,7) , pg_stat_activity.wait_event FROM pg_stat_activity ORDER BY (now() - pg_stat_activity.query_start) DESC;\n| \n | | | | | |\n| 681216 | postgres | active | 07:32.7 | INSERT | transactionid |\n| 679688 | postgres | active | 06:32.2 | INSERT | transactionid |\n| 679572 | postgres | active | 05:44.0 | INSERT | transactionid |\n| 681080 | postgres | active | 01:44.6 | INSERT | transactionid |\n| 681112 | postgres | active | 01:08.2 | INSERT | transactionid |\n| 679556 | postgres | active | 51:42.9 | 
INSERT | transactionid |\n| 679696 | postgres | active | 46:20.9 | INSERT | transactionid |\n| 681144 | postgres | active | 45:20.9 | INSERT | transactionid |\n| 679932 | postgres | active | 44:55.4 | INSERT | transactionid |\n| 679580 | postgres | active | 41:15.4 | INSERT | transactionid |\n| 679400 | postgres | active | 39:51.2 | INSERT | transactionid |\n| 679852 | postgres | active | 37:05.3 | INSERT | transactionid |\n| 681188 | postgres | active | 36:23.2 | INSERT | transactionid |\n| 679544 | postgres | active | 35:33.4 | INSERT | transactionid |\n| 675460 | postgres | active | 26:06.8 | INSERT | transactionid |\n\n\n\nselect version ();\nPostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n\nCPU: v32\nRAM: 320 GB\nshared_buffers = 64GB\neffective_cache_size = 160 GB\n\nany comments on the issue?\n\nThanks,Rj",
"msg_date": "Thu, 8 Apr 2021 20:14:21 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "INSERTS waiting with wait_event is \"transactionid\""
},
{
"msg_contents": "On Thu, 2021-04-08 at 20:14 +0000, Nagaraj Raj wrote:\n> We are trying to load data around 1Bil records into one table with INSERT statements\n> (not able to use COPY command) and they are been waiting for a lock and the wait_event\n> is \"transactionid\", I didn't find any information in the documents. Queries have been\n> waiting for hours.\n\nThat means that your statement is stuck behind a row lock.\n\nRow locks are stored on the table row itself and contain the transaction ID.\nSo the process has to wait until the transaction goes away, which is implemented\nas waiting for a lock on the transaction ID.\n\nThere must be a long running transaction that locks a row that is needed for\nthe INSERT. It could be a row in a different table that is referenced by a\nforeign key.\n\nMake that long running transaction go away. Transactions should never last that long.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Fri, 09 Apr 2021 11:15:53 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INSERTS waiting with wait_event is \"transactionid\""
},
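Laurenz's advice — find the long-running transaction holding the row lock — can be turned into a diagnostic query. A sketch, assuming PostgreSQL 9.6 or later (for `pg_blocking_pids()`) and sufficient privileges to see other sessions' queries:

```sql
-- For each session stuck on a "transactionid" wait, list the session(s)
-- it is actually waiting on, together with the age of their transactions.
SELECT w.pid                 AS waiting_pid,
       now() - w.query_start AS waiting_for,
       b.pid                 AS blocking_pid,
       b.state               AS blocking_state,
       now() - b.xact_start  AS blocking_xact_age,
       left(b.query, 60)     AS blocking_query
FROM pg_stat_activity AS w
JOIN LATERAL unnest(pg_blocking_pids(w.pid)) AS blk(pid) ON true
JOIN pg_stat_activity AS b ON b.pid = blk.pid
WHERE w.wait_event_type = 'Lock'
  AND w.wait_event = 'transactionid'
ORDER BY blocking_xact_age DESC NULLS LAST;
```

The rows with the oldest `blocking_xact_age` are the transactions to investigate (or, if necessary, end with `pg_terminate_backend()`).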
{
"msg_contents": "Hi Laurenz, Thanks for the response. \nYeah understand that, but I'm trying to figure out why it is taking too long. there is foreign key relation to this table. \n\nThanks,Rj On Friday, April 9, 2021, 02:16:08 AM PDT, Laurenz Albe <[email protected]> wrote: \n \n On Thu, 2021-04-08 at 20:14 +0000, Nagaraj Raj wrote:\n> We are trying to load data around 1Bil records into one table with INSERT statements\n> (not able to use COPY command) and they are been waiting for a lock and the wait_event\n> is \"transactionid\", I didn't find any information in the documents. Queries have been\n> waiting for hours.\n\nThat means that your statement is stuck behind a row lock.\n\nRow locks are stored on the table row itself and contain the transaction ID.\nSo the process has to wait until the transaction goes away, which is implemented\nas waiting for a lock on the transaction ID.\n\nThere must be a long running transaction that locks a row that is needed for\nthe INSERT. It could be a row in a different table that is referenced by a\nforeign key.\n\nMake that long running transaction go away. Transactions should never last that long.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n \n\nHi Laurenz, Thanks for the response. Yeah understand that, but I'm trying to figure out why it is taking too long. there is foreign key relation to this table. Thanks,Rj\n\n\n\n On Friday, April 9, 2021, 02:16:08 AM PDT, Laurenz Albe <[email protected]> wrote:\n \n\n\nOn Thu, 2021-04-08 at 20:14 +0000, Nagaraj Raj wrote:> We are trying to load data around 1Bil records into one table with INSERT statements> (not able to use COPY command) and they are been waiting for a lock and the wait_event> is \"transactionid\", I didn't find any information in the documents. 
Queries have been> waiting for hours.That means that your statement is stuck behind a row lock.Row locks are stored on the table row itself and contain the transaction ID.So the process has to wait until the transaction goes away, which is implementedas waiting for a lock on the transaction ID.There must be a long running transaction that locks a row that is needed forthe INSERT. It could be a row in a different table that is referenced by aforeign key.Make that long running transaction go away. Transactions should never last that long.Yours,Laurenz Albe-- Cybertec | https://www.cybertec-postgresql.com",
"msg_date": "Fri, 9 Apr 2021 18:22:17 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INSERTS waiting with wait_event is \"transactionid\""
}
] |
[
{
"msg_contents": "Hello, apologies for the long post, but I want to make sure I’ve got enough\ndetails to describe the problem for y’all.\n\n\n\nI’ve got a 64-core (Ubuntu 18.04 – 240GB RAM running at GCP) instance\nrunning PG 13.2 and PostGIS 3.1.1 and we’re having troubles getting it to\nrun more than 30 or so large queries at the same time accessing the same\ntables. With 60 threads, each thread is only running at ~30% CPU and no\ndiskIO/IOWait (once the tables become cached).\n\n\n\nBoiling the complex queries down to their simplest form, we test running 60\nof this query simultaneously:\n\n\n\nselect\n\n count(*)\n\nfrom\n\n travel_processing_v5.llc_zone z,\n\n parent_set10.usca_trip_points7 t\n\nwhere t.year_num = 2019 and t.month_num = 9\n\nand st_intersects(t.lock_geom, z.s_geom)\n\nand st_intersects(t.lock_geom, z.e_geom);\n\n\n\nllc_zone = 981 rows (568k disk size) with s_geom and e_geom both of\ndatatype geometry(Multipolygon, 2163)\n\nusca_trip_points7 = 79 million rows (469G disk size) with t.lock_geom\ndatatype geometry(Linestring, 2163)\n\n(more detailed schema/stats can be provided if helpful)\n\n\n\npostgresql.conf is pretty normal for a large system like this (with\nappropriate shared_buffer, work_mem, etc. – can be provided if helpful, too)\n\n\n\nWhat I’m finding in pg_stat_activity when running this is lots of\nwait_events of type ‘LockManager’.\n\nRebuilding with CFLAGS=\" -fno-omit-frame-pointer\"\n--prefix=/usr/local/pgsql_13debug --enable-dtrace CPPFLAGS='-DLOCK_DEBUG'\nand then setting trace_lwlocks yields lots of records looking like:\n\n\n\n[39691] LOG: 39691: LWLockAcquire(LockManager 0x7fab2cc09d80): excl 0\nshared 0 haswaiters 1 waiters 6 rOK 1\n\n\n\nDoes anyone have any advice on how to alleviate LockManager’s LWlock issue?\n\n\n\nThanks for any assistance!\n\n\n\n---Paul\n\n\n\nPaul Friedman\n\nCTO\n\n\n\n677 Harrison St | San Francisco, CA 94107\n\n*M:* (650) 270-7676\n\n*E-mail:* [email protected]",
"msg_date": "Mon, 12 Apr 2021 12:37:42 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 12:37:42 -0700, Paul Friedman wrote:\n> Boiling the complex queries down to their simplest form, we test running 60\n> of this query simultaneously:\n\nHow long does one execution of these queries take (on average)? The\nlikely bottlenecks are very different between running 60 concurrent\nqueries that each complete in 0.1ms and ones that take > 1s.\n\n\nCould you show the results for a query like\nSELECT state, backend_type, wait_event_type, wait_event, count(*) FROM pg_stat_activity GROUP BY 1, 2, 3, 4 ORDER BY count(*) DESC;\n?\n\nWithout knowing the proportion of LockManager wait events compared to\nthe rest it's hard to know what to make of it.\n\n\n> Does anyone have any advice on how to alleviate LockManager’s LWlock issue?\n\nIt'd be useful to get a perf profile for this. Both for cpu time and for\nwhat ends up blocking on kernel-level locks. E.g. something like\n\n# cpu time\nperf record --call-graph dwarf -F 500 -a sleep 5\nperf report --no-children --sort comm,symbol\n\nTo send it to the list you can use something like\nperf report --no-children --sort comm,symbol|head -n 500 > somefile\n\n# kernel level blocking on locks\nperf record --call-graph dwarf -e syscalls:sys_enter_futex -a sleep 3\nperf report --no-children --sort comm,symbol\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 14:57:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Thanks for the quick reply!\n\n\n\nThese queries take ~1hr and are the only thing running on the system (all\n60 are launched at the same time and the tables/files are fully-primed into\nmemory so iowaits are basically zero).\n\n\n\nYes, that’s the same query I’ve been running to analyze the locks and this\nis the problem:\n\n\n\nSELECT state, backend_type, wait_event_type, wait_event, count(*) FROM\npg_stat_activity GROUP BY 1, 2, 3, 4 ORDER BY count(*) DESC;\n\n\n\nState backend_type wait_event_type wait_event count\n\nactive client backend LWLock LockManager 28\n\nactive client backend 21\n\n autovacuum launcher Activity AutoVacuumMain 1\n\n logical replication launcher Activity\nLogicalLauncherMain 1\n\n checkpointer Activity CheckpointerMain 1\n\nidle client backend Client ClientRead 1\n\n background writer Activity BgWriterMain 1\n\n walwriter Activity WalWriterMain 1\n\n\n\nThanks again for any advice you have.\n\n\n\n---Paul\n\n\n\nPaul Friedman\n\nCTO\n\n\n\n\n\n677 Harrison St | San Francisco, CA 94107\n\nM: (650) 270-7676\n\nE-mail: [email protected]\n\n\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Monday, April 12, 2021 2:58 PM\nTo: Paul Friedman <[email protected]>\nCc: [email protected]\nSubject: Re: LWLocks by LockManager slowing large DB\n\n\n\nHi,\n\n\n\nOn 2021-04-12 12:37:42 -0700, Paul Friedman wrote:\n\n> Boiling the complex queries down to their simplest form, we test\n\n> running 60 of this query simultaneously:\n\n\n\nHow long does one execution of these queries take (on average)? 
The likely\nbottlenecks are very different between running 60 concurrent queries that\neach complete in 0.1ms and ones that take > 1s.\n\n\n\n\n\nCould you show the results for a query like SELECT state, backend_type,\nwait_event_type, wait_event, count(*) FROM pg_stat_activity GROUP BY 1, 2,\n3, 4 ORDER BY count(*) DESC; ?\n\n\n\nWithout knowing the proportion of LockManager wait events compared to the\nrest it's hard to know what to make of it.\n\n\n\n\n\n> Does anyone have any advice on how to alleviate LockManager’s LWlock\nissue?\n\n\n\nIt'd be useful to get a perf profile for this. Both for cpu time and for\nwhat ends up blocking on kernel-level locks. E.g. something like\n\n\n\n# cpu time\n\nperf record --call-graph dwarf -F 500 -a sleep 5 perf report --no-children\n--sort comm,symbol\n\n\n\nTo send it to the list you can use something like perf report --no-children\n--sort comm,symbol|head -n 500 > somefile\n\n\n\n# kernel level blocking on locks\n\nperf record --call-graph dwarf -e syscalls:sys_enter_futex -a sleep 3 perf\nreport --no-children --sort comm,symbol\n\n\n\nGreetings,\n\n\n\nAndres Freund",
"msg_date": "Mon, 12 Apr 2021 15:15:05 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 15:15:05 -0700, Paul Friedman wrote:\n> Thanks again for any advice you have.\n\nI think we'd need the perf profiles to be able to dig into this\nfurther.\n\nIt's odd that there are a meaningful amount of LockManager contention in\nyour case - assuming the stats you collected weren't just the first few\nmilliseconds of starting those 60 queries, there shouldn't be any\nadditional \"heavyweight locks\" taken given the duration of your queries.\n\nThe futex profile hopefully will tell us from where that is coming\nfrom...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 15:22:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Yes, I ran the query after a couple of minutes. Those are the\nsteady-state numbers.\n\nAlso 'top' shows:\n\ntop - 22:44:26 up 12 days, 23:14, 5 users, load average: 20.99, 21.35,\n19.27\nTasks: 859 total, 26 running, 539 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 34.3 us, 1.6 sy, 0.0 ni, 64.1 id, 0.0 wa, 0.0 hi, 0.0 si,\n0.0 st\nKiB Mem : 24742353+total, 33723356 free, 73160656 used,\n14053952+buff/cache\nKiB Swap: 0 total, 0 free, 0 used. 17132937+avail Mem\n\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n\n30070 postgres 20 0 41.232g 28608 18192 R 53.8 0.0 17:35.74\npostgres\n\n30087 postgres 20 0 41.233g 28408 18180 S 53.8 0.0 17:35.69\npostgres\n\n30055 postgres 20 0 41.233g 28492 18120 R 53.5 0.0 17:41.51\npostgres\n\nNote the postgres processes only running at 53% with the system at 64%\nidle. The 1.7% system time seems indicative of the spinlocks blocking the\nmajor processing.\n\nDo you know what resource the LockManager might be blocking on/protecting\nwith these LWlocks?\n\nAlso, I didn't understand your comment about a 'futex profile', could you\npoint me in the right direction here?\n\nThanks.\n\n---Paul\n\nPaul Friedman\nCTO\n\n\n677 Harrison St | San Francisco, CA 94107\nM: (650) 270-7676\nE-mail: [email protected]\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Monday, April 12, 2021 3:22 PM\nTo: Paul Friedman <[email protected]>\nCc: [email protected]\nSubject: Re: LWLocks by LockManager slowing large DB\n\nHi,\n\nOn 2021-04-12 15:15:05 -0700, Paul Friedman wrote:\n> Thanks again for any advice you have.\n\nI think we'd need the perf profiles to be able to dig into this further.\n\nIt's odd that there are a meaningful amount of LockManager contention in\nyour case - assuming the stats you collected weren't just the first few\nmilliseconds of starting those 60 queries, there shouldn't be any\nadditional \"heavyweight locks\" taken given the duration of your queries.\n\nThe futex profile 
hopefully will tell us from where that is coming from...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 15:56:08 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 15:56:08 -0700, Paul Friedman wrote:\n> Also, I didn't understand your comment about a 'futex profile', could you\n> point me in the right direction here?\n\nMy earlier mail included a section about running a perf profile showing\nthe callers of the futex() system call, which in turn is what lwlocks\nend up using on linux when the lock is contended.\n\nCheck the second half of:\nhttps://www.postgresql.org/message-id/20210412215738.xytq33wlciljyva5%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 16:03:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 14:57 Andres Freund <[email protected]> wrote:\n\n> Without knowing the proportion of LockManager wait events compared to\n> the rest it's hard to know what to make of it.\n\n\nThese OSS tools can be useful to understand the proportion:\n\n- pgCenter\nhttps://github.com/lesovsky/pgcenter\n- pg_wait_sampling (can be used together with POWA monitoring)\nhttps://github.com/postgrespro/pg_wait_sampling\n- pgsentinel\nhttps://github.com/pgsentinel/pgsentinel\n- PASH Viewer (good for visualization, integrates with pgsentinel)\nhttps://github.com/dbacvetkov/PASH-Viewer\n\n>\n\nOn Mon, Apr 12, 2021 at 14:57 Andres Freund <[email protected]> wrote:\nWithout knowing the proportion of LockManager wait events compared to\nthe rest it's hard to know what to make of it.These OSS tools can be useful to understand the proportion:- pgCenter https://github.com/lesovsky/pgcenter- pg_wait_sampling (can be used together with POWA monitoring) https://github.com/postgrespro/pg_wait_sampling- pgsentinel https://github.com/pgsentinel/pgsentinel- PASH Viewer (good for visualization, integrates with pgsentinel) https://github.com/dbacvetkov/PASH-Viewer",
"msg_date": "Mon, 12 Apr 2021 16:33:45 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
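Of the tools Nikolay lists, `pg_wait_sampling` is the most direct fit for this question, since it samples wait events over time rather than relying on instantaneous `pg_stat_activity` snapshots. A minimal sketch, assuming the extension is installed and added to `shared_preload_libraries`:

```sql
CREATE EXTENSION IF NOT EXISTS pg_wait_sampling;

-- Cumulative wait-event profile across all backends; a large share of
-- LWLock / LockManager samples here confirms the contention is sustained
-- rather than a momentary spike.
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
GROUP BY event_type, event
ORDER BY samples DESC
LIMIT 10;
```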
{
"msg_contents": "Thanks for this, I read too quickly!\n\nI've attached the 2 perf reports. From the 2nd one, I can see lots of\ntime waiting for TOAST table locks on the geometry column, but I\ndefinitely don't fully understand the implications or why LockManager\nwould be struggling here.\n\nThanks for the continued help!\n\n---Paul\n\nPaul Friedman\nCTO\n\n\n677 Harrison St | San Francisco, CA 94107\nM: (650) 270-7676\nE-mail: [email protected]\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Monday, April 12, 2021 4:04 PM\nTo: Paul Friedman <[email protected]>\nCc: [email protected]\nSubject: Re: LWLocks by LockManager slowing large DB\n\nHi,\n\nOn 2021-04-12 15:56:08 -0700, Paul Friedman wrote:\n> Also, I didn't understand your comment about a 'futex profile', could\n> you point me in the right direction here?\n\nMy earlier mail included a section about running a perf profile showing\nthe callers of the futex() system call, which in turn is what lwlocks end\nup using on linux when the lock is contended.\n\nCheck the second half of:\nhttps://www.postgresql.org/message-id/20210412215738.xytq33wlciljyva5%40al\nap3.anarazel.de\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 13 Apr 2021 09:33:48 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Thanks for this – these tools (and the raw selects on pg_stat_activity and\npg_locks) are all showing wait events being created by LockManager waiting\non an LWLock.\n\n\n\n---Paul\n\n\n\nPaul Friedman\n\nCTO\n\n\n\n677 Harrison St | San Francisco, CA 94107\n\n*M:* (650) 270-7676\n\n*E-mail:* [email protected]\n\n\n\n*From:* Nikolay Samokhvalov <[email protected]>\n*Sent:* Monday, April 12, 2021 4:34 PM\n*To:* Andres Freund <[email protected]>\n*Cc:* Paul Friedman <[email protected]>;\[email protected]\n*Subject:* Re: LWLocks by LockManager slowing large DB\n\n\n\n\n\n\n\nOn Mon, Apr 12, 2021 at 14:57 Andres Freund <[email protected]> wrote:\n\nWithout knowing the proportion of LockManager wait events compared to\nthe rest it's hard to know what to make of it.\n\n\n\nThese OSS tools can be useful to understand the proportion:\n\n\n\n- pgCenter\n\nhttps://github.com/lesovsky/pgcenter\n\n- pg_wait_sampling (can be used together with POWA monitoring)\n\nhttps://github.com/postgrespro/pg_wait_sampling\n\n- pgsentinel\n\nhttps://github.com/pgsentinel/pgsentinel\n\n- PASH Viewer (good for visualization, integrates with pgsentinel)\n\nhttps://github.com/dbacvetkov/PASH-Viewer",
"msg_date": "Tue, 13 Apr 2021 09:36:36 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 09:33:48 -0700, Paul Friedman wrote:\n> I've attached the 2 perf reports. From the 2nd one, I can see lots of\n> time waiting for TOAST table locks on the geometry column, but I\n> definitely don't fully understand the implications or why LockManager\n> would be struggling here.\n\nOh, that is interesting. For toast tables we do not keep locks held for\nthe duration of the transaction, but release the lock as soon as one\naccess is done. It seems your workload is doing so many toast accesses\nthat the table / index level locking for toast tables gets to be the\nbottleneck.\n\nIt'd be interesting to know if the issue vanishes if you force the lock\non the toast table and its index to be acquired explicitly.\n\nYou can find the toast table names with something like:\n\nSELECT reltoastrelid::regclass\nFROM pg_class\nWHERE oid IN ('travel_processing_v5.llc_zone'::regclass, 'travel_processing_v5.llc_zone'::regclass);\n\nThat should give you two relation names looking like\n\"pg_toast.pg_toast_24576\", just with a different number.\n\nIf you then change your workload to be (with adjusted OIDs of course):\n\nBEGIN;\nSELECT * FROM pg_toast.pg_toast_24576 LIMIT 0;\nSELECT * FROM pg_toast.pg_toast_64454 LIMIT 0;\n<youquery>\nCOMMIT;\n\nDoes the scalability issue vanish?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 13 Apr 2021 11:16:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
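Andres's experiment, written out end to end as a sketch. The `pg_toast_…` names are placeholders to be replaced with whatever the lookup returns; note also that the lookup in the original mail lists `llc_zone` twice, where the second entry is presumably meant to be the trip-points table:

```sql
-- 1. Find the toast relations backing the two tables in the query.
SELECT oid::regclass AS main_rel, reltoastrelid::regclass AS toast_rel
FROM pg_class
WHERE oid IN ('travel_processing_v5.llc_zone'::regclass,
              'parent_set10.usca_trip_points7'::regclass);

-- 2. In each worker session, touch the toast tables once inside the
--    transaction. A user-level SELECT takes an ordinary relation lock that
--    is held until COMMIT, so later detoast accesses no longer acquire and
--    release the toast-table lock through the shared lock manager.
BEGIN;
SELECT * FROM pg_toast.pg_toast_24576 LIMIT 0;  -- placeholder name
SELECT * FROM pg_toast.pg_toast_64454 LIMIT 0;  -- placeholder name
-- ... the long-running st_intersects query ...
COMMIT;
```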
{
"msg_contents": "YES!!! This completely alleviates the bottleneck and I'm able to run the\nqueries full-throttle. Thank you SO much for your help+insight.\n\nInterestingly, \"lock pg_toast.pg_toast_2233612264 in ACCESS SHARE MODE;\"\nwhich should do the same thing returns an error \" ERROR:\n\"pg_toast_2233612264\" is not a table or view\"\n\nSounds like I should file this as a requested improvement?\n\nThanks again.\n\n---Paul\n\nPaul Friedman\nCTO\n\n\n677 Harrison St | San Francisco, CA 94107\nM: (650) 270-7676\nE-mail: [email protected]\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Tuesday, April 13, 2021 11:17 AM\nTo: Paul Friedman <[email protected]>\nCc: [email protected]\nSubject: Re: LWLocks by LockManager slowing large DB\n\nHi,\n\nOn 2021-04-13 09:33:48 -0700, Paul Friedman wrote:\n> I've attached the 2 perf reports. From the 2nd one, I can see lots of\n> time waiting for TOAST table locks on the geometry column, but I\n> definitely don't fully understand the implications or why LockManager\n> would be struggling here.\n\nOh, that is interesting. For toast tables we do not keep locks held for\nthe duration of the transaction, but release the lock as soon as one\naccess is done. 
It seems your workload is doing so many toast accesses\nthat the table / index level locking for toast tables gets to be the\nbottleneck.\n\nIt'd be interesting to know if the issue vanishes if you force the lock on\nthe toast table and its index to be acquired explicitly.\n\nYou can find the toast table names with something like:\n\nSELECT reltoastrelid::regclass\nFROM pg_class\nWHERE oid IN ('travel_processing_v5.llc_zone'::regclass,\n'travel_processing_v5.llc_zone'::regclass);\n\nThat should give you two relation names looking like\n\"pg_toast.pg_toast_24576\", just with a different number.\n\nIf you then change your workload to be (with adjusted OIDs of course):\n\nBEGIN;\nSELECT * FROM pg_toast.pg_toast_24576 LIMIT 0; SELECT * FROM\npg_toast.pg_toast_64454 LIMIT 0; <youquery> COMMIT;\n\nDoes the scalability issue vanish?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 13 Apr 2021 11:47:06 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 11:47:06 -0700, Paul Friedman wrote:\n> YES!!! This completely alleviates the bottleneck and I'm able to run the\n> queries full-throttle. Thank you SO much for your help+insight.\n\nCool. And damn: I can't immediately think of a way to optimize this to\nnot require this kind of hack in the future.\n\n\n> Interestingly, \"lock pg_toast.pg_toast_2233612264 in ACCESS SHARE MODE;\"\n> which should do the same thing returns an error \" ERROR:\n> \"pg_toast_2233612264\" is not a table or view\"\n>\n> Sounds like I should file this as a requested improvement?\n\nThe ability to lock a toast table? Yea, it might be worth doing that. I\nseem to recall this being discussed not too long ago...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:48:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "On 2021-Apr-13, Andres Freund wrote:\n\n> > Sounds like I should file this as a requested improvement?\n> \n> The ability to lock a toast table? Yea, it might be worth doing that. I\n> seem to recall this being discussed not too long ago...\n\nYep, commit 59ab4ac32460 reverted by eeda7f633809. There were some\nissues with the semantics of locking views. It didn't seem\ninsurmountable, but I didn't get around to it.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Use it up, wear it out, make it do, or do without\"\n\n\n",
"msg_date": "Tue, 13 Apr 2021 16:54:59 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2021-04-13 11:47:06 -0700, Paul Friedman wrote:\n>> YES!!! This completely alleviates the bottleneck and I'm able to run the\n>> queries full-throttle. Thank you SO much for your help+insight.\n\n> Cool. And damn: I can't immediately think of a way to optimize this to\n> not require this kind of hack in the future.\n\nMaybe the same thing we do with user tables, ie not give up the lock\nwhen we close a toast rel? As long as the internal lock counters\nare 64-bit, we'd not have to worry about overflowing them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 18:29:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "> For toast tables we do not keep locks held for the duration of the\ntransaction, > but release the lock as soon as one access is done.\n...\n> The ability to lock a toast table? Yea, it might be worth doing that. I\nseem to > recall this being discussed not too long ago...\n\nActually, the requested improvement I was thinking of was to have the\nlocks on the toast table somehow have the same lifespan as the locks on\nthe main table to avoid this problem to begin with.\n\n---Paul\n\nPaul Friedman\nCTO\n\n\n677 Harrison St | San Francisco, CA 94107\nM: (650) 270-7676\nE-mail: [email protected]\n\n-----Original Message-----\nFrom: Andres Freund <[email protected]>\nSent: Tuesday, April 13, 2021 1:48 PM\nTo: Paul Friedman <[email protected]>\nCc: [email protected]\nSubject: Re: LWLocks by LockManager slowing large DB\n\nHi,\n\nOn 2021-04-13 11:47:06 -0700, Paul Friedman wrote:\n> YES!!! This completely alleviates the bottleneck and I'm able to run\n> the queries full-throttle. Thank you SO much for your help+insight.\n\nCool. And damn: I can't immediately think of a way to optimize this to not\nrequire this kind of hack in the future.\n\n\n> Interestingly, \"lock pg_toast.pg_toast_2233612264 in ACCESS SHARE MODE;\"\n> which should do the same thing returns an error \" ERROR:\n> \"pg_toast_2233612264\" is not a table or view\"\n>\n> Sounds like I should file this as a requested improvement?\n\nThe ability to lock a toast table? Yea, it might be worth doing that. I\nseem to recall this being discussed not too long ago...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Apr 2021 15:45:12 -0700",
"msg_from": "Paul Friedman <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> Cool. And damn: I can't immediately think of a way to optimize this to\n>> not require this kind of hack in the future.\n\n> Maybe the same thing we do with user tables, ie not give up the lock\n> when we close a toast rel? As long as the internal lock counters\n> are 64-bit, we'd not have to worry about overflowing them.\n\nLike this? This passes check-world, modulo the one very-unsurprising\nregression test change. I've not tried to do any performance testing.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Apr 2021 19:16:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 19:16:46 -0400, Tom Lane wrote:\n> > Maybe the same thing we do with user tables, ie not give up the lock\n> > when we close a toast rel? As long as the internal lock counters\n> > are 64-bit, we'd not have to worry about overflowing them.\n\nWell, I was assuming we'd not want to do that, but I am generally on\nboard with the concept (and think our early lock release in a bunch of\nplaces is problematic).\n\n\n> Like this? This passes check-world, modulo the one very-unsurprising\n> regression test change. I've not tried to do any performance testing.\n\nI wonder if there's a realistic chance it could create additional\ndeadlocks that don't exist right now?\n\nWould it be a problem that we'd still release the locks on catalog\ntables early, but not on its toast table?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Apr 2021 18:01:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2021-04-13 19:16:46 -0400, Tom Lane wrote:\n>> Like this? This passes check-world, modulo the one very-unsurprising\n>> regression test change. I've not tried to do any performance testing.\n\n> I wonder if there's a realistic chance it could create additional\n> deadlocks that don't exist right now?\n\nNot on user tables, because we'd always be holding at least as much\nof a lock on the parent table. However ...\n\n> Would it be a problem that we'd still release the locks on catalog\n> tables early, but not on its toast table?\n\n... hmm, not sure. I can't immediately think of a scenario where\nit'd be problematic (or any more problematic than DDL on a catalog\nwould be anyway). But that doesn't mean there isn't one.\n\nThe concerns that had come to my mind were more along the lines\nof things like pg_dump requiring a larger footprint in the shared\nlock table. We could alleviate that by increasing the default\nvalue of max_locks_per_transaction, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Apr 2021 23:04:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 23:04:50 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2021-04-13 19:16:46 -0400, Tom Lane wrote:\n> >> Like this? This passes check-world, modulo the one very-unsurprising\n> >> regression test change. I've not tried to do any performance testing.\n> \n> > I wonder if there's a realistic chance it could create additional\n> > deadlocks that don't exist right now?\n> \n> Not on user tables, because we'd always be holding at least as much\n> of a lock on the parent table. However ...\n\nI suspect that's not strictly *always* the case due to some corner cases\naround a variable to a toast value in plpgsql surviving subtransactions\netc...\n\n\n> The concerns that had come to my mind were more along the lines\n> of things like pg_dump requiring a larger footprint in the shared\n> lock table. We could alleviate that by increasing the default\n> value of max_locks_per_transaction, perhaps.\n\nProbably worth doing one of these releases independently - especially\nwith partitioning the current value strikes me as being on the too low\nside.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Apr 2021 20:48:16 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
},
{
"msg_contents": "Hello\n\nOn 2021-Apr-13, Andres Freund wrote:\n\n> > The concerns that had come to my mind were more along the lines\n> > of things like pg_dump requiring a larger footprint in the shared\n> > lock table. We could alleviate that by increasing the default\n> > value of max_locks_per_transaction, perhaps.\n> \n> Probably worth doing one of these releases independently - especially\n> with partitioning the current value strikes me as being on the too low\n> side.\n\nMaybe it would make sense to scale the default up with shared_buffers,\nwhich nowadays we seem to use as a proxy for server size? (While also\nbeing about total memory consumption)\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n",
"msg_date": "Wed, 14 Apr 2021 09:20:46 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LWLocks by LockManager slowing large DB"
}
] |
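The thread above closes with Tom and Andres suggesting a larger default `max_locks_per_transaction`, partly because partitioned (and toast) relations multiply the number of locked objects per query. As a rough planning aid — a sketch, not part of the thread; the helper names and the per-table toast/index counts are illustrative assumptions — the shared lock table capacity follows the formula documented for that setting:

```python
def lock_table_capacity(max_locks_per_transaction=64,
                        max_connections=100,
                        max_prepared_transactions=0):
    """Upper bound on distinct objects the shared lock table can track,
    per the formula in the PostgreSQL docs for max_locks_per_transaction."""
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)


def locks_needed(tables, toast_per_table=1, indexes_per_table=2):
    """Very rough estimate of lock entries one query might pin if it
    touches `tables` relations plus their toast tables and some indexes.
    The per-table counts here are assumptions for illustration only."""
    return tables * (1 + toast_per_table + indexes_per_table)


# A scan over 250 partitions, each with a toast table and two indexes,
# already approaches a default-sized lock table shared by all backends.
print(lock_table_capacity(), locks_needed(250))
```

Comparing the two numbers for a given workload gives a first hint of whether raising `max_locks_per_transaction` is worth trying before lock-table pressure shows up as LockManager LWLock waits.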
[
{
"msg_contents": "Hey all,\n\nI've been pulling my hair out over this for days now, as I'm trying to\nbuild a low latency application. Databases should be fast, but I can not\nwork out why so much latency is added between the actual database process\nand the application code. For simple queries, that should take less than a\nmillisecond, this mystery latency is by far the biggest performance hit.\n\nFor example, I have a very simple table on a local Postgres database, which\ntakes 324us to query. This is absolutely fine.\n\npostgres=# EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)\n select count(\"ID\") from example t;\n-[ RECORD 1 ]------------------------------------------------------\nQUERY PLAN | Aggregate (actual rows=1 loops=1)\n-[ RECORD 2 ]------------------------------------------------------\nQUERY PLAN | -> Seq Scan on example t (actual rows=4119 loops=1)\n-[ RECORD 3 ]------------------------------------------------------\nQUERY PLAN | Planning Time: 0.051 ms\n-[ RECORD 4 ]------------------------------------------------------\nQUERY PLAN | Execution Time: 0.324 ms\n\nBut then if I want to connect to the database in Python, getting the actual\ndata takes over 2500us! The loopback interface is not the problem here, I\nsanity checked that (it adds 100us at a stretch).\n\nIn [1]: %time cursor.execute(\"select count(\\\"ID\\\") from example;\");r =\ncursor.fetchmany()\nCPU times: user 316 µs, sys: 1.12 ms, total: 1.44 ms\nWall time: 2.73 ms\n\nInvestigating further, I opened up WireShark to see how long the packets\nthemselves take.\n\n[image: Wireshark screenshot] <https://i.stack.imgur.com/ItoSu.png>\n\nFrom this I can tell that Postgres took 2594us to send the data. This kind\nof overhead for a loopback IPC call is way beyond what I would expect. Why\nis this? And how can I optimise this away?\n\nThis is a problem for the software ecosystem, because people start thinking\ndatabases are too slow to do simple lookups, and they start caching queries\nin Redis that take 300us to execute, but 3ms to actually fetch back to the\nclient. It adds up to an awful lot of latency.\n\nAs an aside, I've been really miffed by how bad database IPC performance\nhas been in general. Oracle has a problem where each ~1500 bytes makes\nreceiving the packet 800us slower (on a gigabit network).\n\nI'd really love to know what's happening here.\n\nRollo",
"msg_date": "Thu, 15 Apr 2021 00:10:40 +0100",
"msg_from": "Rollo Konig-Brock <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is there a tenfold difference between Postgres's alleged query\n execution time and packet transmission time?"
},
{
"msg_contents": "Rollo Konig-Brock <[email protected]> writes:\n> I've been pulling my hair out over this for days now, as I'm trying to\n> build a low latency application. Databases should be fast, but I can not\n> work out why so much latency is added between the actual database process\n> and the application code. For simple queries, that should take less than a\n> millisecond, this mystery latency is by far the biggest performance hit.\n\nWell, for sub-millisecond queries, I'm afraid that EXPLAIN's numbers\nomit a lot of the overhead that you have to think about for such short\nqueries. For instance:\n\n* Query parsing (both the grammar and parse analysis). You could get\na handle on how much this is relative to what EXPLAIN knows about by\nenabling log_parser_stats, log_planner_stats, and log_executor_stats.\nDepending on workload, you *might* be able to ameliorate these costs\nby using prepared queries, although that cure can easily be worse\nthan the disease.\n\n* I/O conversions, notably both formatting of output data and charset\nencoding conversions. You can possibly ameliorate these by using\nbinary output and making sure that the client and server use the same\nencoding.\n\n* SSL encryption. This is probably not enabled on a local loopback\nconnection, but it doesn't hurt to check.\n\n* Backend catalog cache filling. It doesn't pay to make a connection\nfor just one or a few queries, because a newly started backend\nprocess won't really be up to speed until it's populated its caches\nwith catalog data that's relevant to your queries. I think most\n(not all) of this cost is incurred during parse analysis, which\nwould help to hide it from EXPLAIN-based investigation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Apr 2021 09:27:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is there a tenfold difference between Postgres's alleged\n query execution time and packet transmission time?"
}
] |
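Tom's list of per-query overheads includes I/O conversions (formatting output data), which can dominate sub-millisecond queries. The standalone Python sketch below isolates just that one cost — it uses no PostgreSQL client at all and makes no claim about libpq's exact code path — by contrasting text-format encode/parse with fixed-width binary packing of the same integers:

```python
import struct
import time

values = list(range(100_000))

# Text "protocol": every integer is formatted to decimal on one side
# and parsed back from decimal on the other.
t0 = time.perf_counter()
text_wire = "\n".join(str(v) for v in values).encode()
parsed_text = [int(v) for v in text_wire.split(b"\n")]
t_text = time.perf_counter() - t0

# Binary "protocol": fixed-width network-order int32s, no per-value
# decimal formatting or parsing.
t0 = time.perf_counter()
bin_wire = struct.pack(f"!{len(values)}i", *values)
parsed_bin = list(struct.unpack(f"!{len(values)}i", bin_wire))
t_bin = time.perf_counter() - t0

assert parsed_text == parsed_bin == values
print(f"text: {t_text*1e3:.2f} ms, binary: {t_bin*1e3:.2f} ms")
```

The absolute numbers are machine-dependent and this toy deliberately ignores parsing, planning, and syscall costs; the point is only that conversion work scales with the result set even when execution itself is trivial, which is why binary result format and matched client/server encodings are on Tom's checklist.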
[
{
"msg_contents": "Hi,\n\nI currently have a strange behavior once statistics are collected. This is the statement (I don't know the application, the statement is as it is):\n\nexplain (analyze, buffers) select distinct standardzi4_.code as col_0_0_,\n person1_.personalnummer as col_1_0_,\n person1_.funktion as col_2_0_,\n person1_.vorgesetzter as col_3_0_,\n person1_.ga_nr as col_5_0_,\n person1_.gueltig_ab as col_6_0_\nfrom pia_01.pesz_person_standardziel personstan0_\n left outer join pia_01.pes_person_zielvergabe personziel3_ on personstan0_.pes_id = personziel3_.id\n left outer join pia_01.stz_standardziel standardzi4_ on personstan0_.stz_id = standardzi4_.id\n cross join pia_01.per_person person1_\n cross join pia_01.pess_person_stufe personstan2_\n cross join pia_01.zid_zieldefinition zieldefini5_\n cross join pia_01.stuv_stufe_vorgabe stufevorga8_\nwhere personziel3_.zid_id = zieldefini5_.id\n and personstan2_.stuv_id = stufevorga8_.id\n and zieldefini5_.jahr=2021\n and (person1_.id in\n (select persons6_.per_id from pia_01.pesr_zielvergabe_person persons6_ where personziel3_.id = persons6_.pes_id))\n and (personstan0_.pess_id is null)\n and (personstan2_.id in\n (select stufen7_.id from pia_01.pess_person_stufe stufen7_ where personstan0_.id = stufen7_.pesz_id))\n and stufevorga8_.default_prozent_sollwert = 100\n and personziel3_.finale_vis_status = 'ANGENOMMEN';\n\nWithout any statistics the query runs quite fine (this is after a fresh import of a dump, no statistics yet):\n\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1549875.41..1550295.03 rows=23978 width=32) (actual time=14990.816..14990.842 rows=80 loops=1)\n Buffers: shared hit=24422449\n -> Sort (cost=1549875.41..1549935.36 rows=23978 width=32) (actual time=14990.815..14990.824 rows=80 loops=1)\n Sort Key: 
person1_.personalnummer, standardzi4_.code, person1_.funktion, person1_.vorgesetzter, person1_.ga_nr, person1_.gueltig_ab\n Sort Method: quicksort Memory: 31kB\n Buffers: shared hit=24422449\n -> Nested Loop (cost=28.90..1548131.08 rows=23978 width=32) (actual time=13496.302..14990.767 rows=80 loops=1)\n Join Filter: (SubPlan 1)\n Rows Removed by Join Filter: 859840\n Buffers: shared hit=24422449\n -> Seq Scan on per_person person1_ (cost=0.00..250.49 rows=10749 width=38) (actual time=0.025..1.119 rows=10749 loops=1)\n Buffers: shared hit=143\n -> Materialize (cost=28.90..678088.61 rows=4 width=10) (actual time=0.019..1.252 rows=80 loops=10749)\n Buffers: shared hit=21842546\n -> Nested Loop Left Join (cost=28.90..678088.59 rows=4 width=10) (actual time=198.423..13419.226 rows=80 loops=1)\n Buffers: shared hit=21842546\n -> Nested Loop (cost=28.76..678087.75 rows=4 width=16) (actual time=198.414..13418.767 rows=80 loops=1)\n Join Filter: (SubPlan 2)\n Rows Removed by Join Filter: 5143360\n Buffers: shared hit=21842386\n -> Nested Loop (cost=9.43..5319.71 rows=1 width=24) (actual time=69.778..90.956 rows=80 loops=1)\n Join Filter: (personziel3_.id = personstan0_.pes_id)\n Rows Removed by Join Filter: 435696\n Buffers: shared hit=18053\n -> Index Scan using dwe30 on pesz_person_standardziel personstan0_ (cost=0.29..5300.69 rows=321 width=24) (actual time=0.037..25.543 rows=54472 loops=1)\n Filter: (pess_id IS NULL)\n Rows Removed by Filter: 9821\n Buffers: shared hit=18031\n -> Materialize (cost=9.14..14.22 rows=1 width=8) (actual time=0.000..0.000 rows=8 loops=54472)\n Buffers: shared hit=22\n -> Nested Loop (cost=9.14..14.21 rows=1 width=8) (actual time=0.534..0.564 rows=8 loops=1)\n Buffers: shared hit=22\n -> Seq Scan on zid_zieldefinition zieldefini5_ (cost=0.00..1.05 rows=1 width=8) (actual time=0.012..0.014 rows=1 loops=1)\n Filter: (jahr = 2021)\n Rows Removed by Filter: 3\n Buffers: shared hit=1\n -> Bitmap Heap Scan on pes_person_zielvergabe personziel3_ 
(cost=9.14..13.15 rows=1 width=16) (actual time=0.514..0.537 rows=8 loops=1)\n Recheck Cond: ((zid_id = zieldefini5_.id) AND ((finale_vis_status)::text = 'ANGENOMMEN'::text))\n Heap Blocks: exact=7\n Buffers: shared hit=21\n -> BitmapAnd (cost=9.14..9.14 rows=1 width=0) (actual time=0.498..0.499 rows=0 loops=1)\n Buffers: shared hit=14\n -> Bitmap Index Scan on dwe33 (cost=0.00..4.44 rows=21 width=0) (actual time=0.137..0.137 rows=1109 loops=1)\n Index Cond: (zid_id = zieldefini5_.id)\n Buffers: shared hit=5\n -> Bitmap Index Scan on dwe35 (cost=0.00..4.44 rows=21 width=0) (actual time=0.345..0.345 rows=1867 loops=1)\n Index Cond: ((finale_vis_status)::text = 'ANGENOMMEN'::text)\n Buffers: shared hit=9\n -> Nested Loop (cost=19.33..1493.56 rows=892 width=8) (actual time=0.928..14.326 rows=64293 loops=80)\n Buffers: shared hit=157600\n -> Seq Scan on stuv_stufe_vorgabe stufevorga8_ (cost=0.00..1.12 rows=1 width=8) (actual time=0.001..0.007 rows=4 loops=80)\n Filter: (default_prozent_sollwert = 100)\n Rows Removed by Filter: 6\n Buffers: shared hit=80\n -> Bitmap Heap Scan on pess_person_stufe personstan2_ (cost=19.33..1483.51 rows=892 width=16) (actual time=0.465..2.179 rows=16073 loops=320)\n Recheck Cond: (stuv_id = stufevorga8_.id)\n Heap Blocks: exact=142560\n Buffers: shared hit=157520\n -> Bitmap Index Scan on dwe32 (cost=0.00..19.11 rows=892 width=0) (actual time=0.427..0.427 rows=16073 loops=320)\n Index Cond: (stuv_id = stufevorga8_.id)\n Buffers: shared hit=14960\n SubPlan 2\n -> Bitmap Heap Scan on pess_person_stufe stufen7_ (cost=19.33..1483.51 rows=892 width=8) (actual time=0.001..0.002 rows=3 loops=5143440)\n Recheck Cond: (personstan0_.id = pesz_id)\n Heap Blocks: exact=6236413\n Buffers: shared hit=21666733\n -> Bitmap Index Scan on dwe41 (cost=0.00..19.11 rows=892 width=0) (actual time=0.001..0.001 rows=3 loops=5143440)\n Index Cond: (pesz_id = personstan0_.id)\n Buffers: shared hit=15430320\n -> Index Scan using dwe26 on stz_standardziel 
standardzi4_ (cost=0.14..0.21 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=80)\n Index Cond: (id = personstan0_.stz_id)\n Buffers: shared hit=160\n SubPlan 1\n -> Bitmap Heap Scan on pesr_zielvergabe_person persons6_ (cost=4.50..35.86 rows=28 width=8) (actual time=0.001..0.001 rows=1 loops=859920)\n Recheck Cond: (personziel3_.id = pes_id)\n Heap Blocks: exact=859920\n Buffers: shared hit=2579760\n -> Bitmap Index Scan on dwe40 (cost=0.00..4.49 rows=28 width=0) (actual time=0.001..0.001 rows=1 loops=859920)\n Index Cond: (pes_id = personziel3_.id)\n Buffers: shared hit=1719840\n Planning Time: 3.796 ms\n Execution Time: 14991.063 ms\n\n\nAs soon as statistics are there (default configuration of 100) it takes ages to complete and the plan changes to something like this:\n\nSort (cost=39436681358.77..39436681466.76 rows=43197 width=19) (actual time=6650173.775..6650173.787 rows=80 loops=1)\n Sort Key: person1_.personalnummer\n Sort Method: quicksort Memory: 31kB\n Buffers: shared hit=14615335936 read=549, temp read=24 written=196\n I/O Timings: read=11.777\n -> HashAggregate (cost=39436677600.92..39436678032.89 rows=43197 width=19) (actual time=6650173.529..6650173.760 rows=80 loops=1)\n Group Key: person1_.personalnummer, standardzi4_.code, person1_.funktion, person1_.vorgesetzter, person1_.ga_nr, person1_.gueltig_ab\n Buffers: shared hit=14615335936 read=549, temp read=24 written=196\n I/O Timings: read=11.777\n -> Hash Join (cost=7114329.95..21693291098.12 rows=1182892433520 width=19) (actual time=4836630.324..6650173.340 rows=80 loops=1)\n Hash Cond: (personstan0_.pes_id = personziel3_.id)\n Buffers: shared hit=14615335936 read=549, temp read=24 written=196\n I/O Timings: read=11.777\n -> Nested Loop (cost=0.85..6566973644.72 rows=1944203730 width=10) (actual time=2.004..6649787.421 rows=54472 loops=1)\n Join Filter: (SubPlan 2)\n Rows Removed by Join Filter: 3502113824\n Buffers: shared hit=14615065561 read=549\n I/O Timings: read=11.777\n -> Nested 
Loop Left Join (cost=0.43..10803.16 rows=54472 width=18) (actual time=0.027..429.204 rows=54472 loops=1)\n Buffers: shared hit=112968 read=142\n I/O Timings: read=3.081\n -> Index Scan using dbi21 on pesz_person_standardziel personstan0_ (cost=0.29..2092.80 rows=54472 width=24) (actual time=0.017..132.607 rows=54472 loops=1)\n Filter: (pess_id IS NULL)\n Rows Removed by Filter: 9821\n Buffers: shared hit=4024 read=142\n I/O Timings: read=3.081\n -> Index Scan using dbi22 on stz_standardziel standardzi4_ (cost=0.14..0.16 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=54472)\n Index Cond: (id = personstan0_.stz_id)\n Buffers: shared hit=108944\n -> Materialize (cost=0.42..3691.48 rows=71384 width=8) (actual time=0.000..3.681 rows=64293 loops=54472)\n Buffers: shared hit=1970\n -> Nested Loop (cost=0.42..3334.56 rows=71384 width=8) (actual time=0.027..19.700 rows=64293 loops=1)\n Buffers: shared hit=1970\n -> Seq Scan on stuv_stufe_vorgabe stufevorga8_ (cost=0.00..1.12 rows=4 width=8) (actual time=0.007..0.014 rows=4 loops=1)\n Filter: (default_prozent_sollwert = 100)\n Rows Removed by Filter: 6\n Buffers: shared hit=1\n -> Index Scan using dbi24 on pess_person_stufe personstan2_ (cost=0.42..635.07 rows=19829 width=16) (actual time=0.009..3.139 rows=16073 loops=4)\n Index Cond: (stuv_id = stufevorga8_.id)\n Buffers: shared hit=1969\n SubPlan 2\n -> Index Scan using dbi31 on pess_person_stufe stufen7_ (cost=0.42..2.92 rows=3 width=8) (actual time=0.001..0.001 rows=3 loops=3502168296)\n Index Cond: (pesz_id = personstan0_.id)\n Buffers: shared hit=14614950623 read=407\n I/O Timings: read=8.696\n -> Hash (cost=7065823.25..7065823.25 rows=2508548 width=25) (actual time=134.212..134.215 rows=9 loops=1)\n Buckets: 65536 Batches: 64 Memory Usage: 556kB\n Buffers: shared hit=270375\n -> Nested Loop (cost=0.28..7065823.25 rows=2508548 width=25) (actual time=26.297..133.908 rows=9 loops=1)\n Join Filter: (SubPlan 1)\n Rows Removed by Join Filter: 85983\n Buffers: 
shared hit=270375\n -> Nested Loop (cost=0.28..201.31 rows=467 width=8) (actual time=1.708..1.911 rows=8 loops=1)\n Join Filter: (personziel3_.zid_id = zieldefini5_.id)\n Rows Removed by Join Filter: 1859\n Buffers: shared hit=1508\n -> Index Scan using dbi35 on pes_person_zielvergabe personziel3_ (cost=0.28..172.26 rows=1867 width=16) (actual time=0.014..1.047 rows=1867 loops=1)\n Filter: ((finale_vis_status)::text = 'ANGENOMMEN'::text)\n Rows Removed by Filter: 2256\n Buffers: shared hit=1507\n -> Materialize (cost=0.00..1.05 rows=1 width=8) (actual time=0.000..0.000 rows=1 loops=1867)\n Buffers: shared hit=1\n -> Seq Scan on zid_zieldefinition zieldefini5_ (cost=0.00..1.05 rows=1 width=8) (actual time=0.007..0.007 rows=1 loops=1)\n Filter: (jahr = 2021)\n Rows Removed by Filter: 3\n Buffers: shared hit=1\n -> Materialize (cost=0.00..304.24 rows=10749 width=25) (actual time=0.002..1.463 rows=10749 loops=8)\n Buffers: shared hit=143\n -> Seq Scan on per_person person1_ (cost=0.00..250.49 rows=10749 width=25) (actual time=0.008..3.364 rows=10749 loops=1)\n Buffers: shared hit=143\n SubPlan 1\n -> Index Scan using dbi36 on pesr_zielvergabe_person persons6_ (cost=0.28..2.50 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=85992)\n Index Cond: (pes_id = personziel3_.id)\n Buffers: shared hit=268724\nPlanning Time: 1.680 ms\nExecution Time: 6650174.623 ms\n\nHave a look a the row estimates starting in the HasAggregate node. The behavior is the same for 9.6.20, 12.5 and 14devel. Is this a known issue?\n\nRegards\nDaniel\n\n",
"msg_date": "Thu, 15 Apr 2021 13:17:52 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange behavior once statistics are there"
},
{
"msg_contents": "\"Daniel Westermann (DWE)\" <[email protected]> writes:\n> I currently have a strange behavior once statistics are collected. This is the statement (I don't know the application, the statement is as it is):\n\nI think your problem is with the subplan conditions, ie\n\n> and (person1_.id in\n> (select persons6_.per_id from pia_01.pesr_zielvergabe_person persons6_ where personziel3_.id = persons6_.pes_id))\n> ...\n> and (personstan2_.id in\n> (select stufen7_.id from pia_01.pess_person_stufe stufen7_ where personstan0_.id = stufen7_.pesz_id))\n\nThese essentially create weird join conditions (between person1_ and\npersonziel3_ in the first case or personstan2_ and personstan0_ in\nthe second case) that the planner has no idea how to estimate.\nIt just throws up its hands and uses a default selectivity of 0.5,\nwhich is nowhere near reality in this case.\n\nYou accidentally got an acceptable (not great) plan anyway without\nstatistics, but not so much with statistics. Worse yet, the subplans\nhave to be implemented essentially as nestloop joins.\n\nI'd suggest trying to flatten these to be regular joins, ie\ntry to bring up persons6_ and stufen7_ into the main JOIN nest.\nIt looks like persons6_.pes_id might be unique, meaning that you\ndon't really need the IN behavior in the first case so flattening\nit should be straightforward. The other one is visibly not unique,\nbut since you're using \"select distinct\" at the top level anyway,\ngetting duplicate rows might not be a problem (unless there are\na lot of duplicates?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Apr 2021 11:00:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior once statistics are there"
},
{
"msg_contents": "From: Tom Lane <[email protected]>\nSent: Thursday, April 15, 2021 17:00\nTo: Daniel Westermann (DWE) <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Strange behavior once statistics are there \n \n>I'd suggest trying to flatten these to be regular joins, ie\n>try to bring up persons6_ and stufen7_ into the main JOIN nest.\n>It looks like persons6_.pes_id might be unique, meaning that you\n>don't really need the IN behavior in the first case so flattening\n>it should be straightforward. The other one is visibly not unique,\n>but since you're using \"select distinct\" at the top level anyway,\n>getting duplicate rows might not be a problem (unless there are\n>a lot of duplicates?)\n\nThank you, Tom\n\n",
"msg_date": "Fri, 16 Apr 2021 06:14:42 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior once statistics are there"
}
] |
[
{
"msg_contents": "Hi,\n\nIs there any way to set time that CURRENT_TIMESTAMP and/or now() will give next time? (We need it only for testing purposes so if there is any hack, cheat, etc. It will be fine)",
"msg_date": "Thu, 15 Apr 2021 16:45:44 +0300",
"msg_from": "Warstone@list.ru <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there a way to change current time?"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 04:45:44PM +0300, [email protected] wrote:\n> Hi,\n> \n> Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give next\n> time? (We need it only for testing purposes so if there is any hack, cheat,\n> etc. It will be fine)\n\nNo, it gets the time from the operating system.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 15 Apr 2021 09:58:23 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to change current time?"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 09:58:23AM -0400, Bruce Momjian wrote:\n> On Thu, Apr 15, 2021 at 04:45:44PM +0300, [email protected] wrote:\n> > Hi,\n> > \n> > Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give next\n> > time? (We need it only for testing purposes so if there is any hack, cheat,\n> > etc. It will be fine)\n> \n> No, it gets the time from the operating system.\n\nYou could overload now():\n\npostgres=# CREATE DATABASE pryzbyj;\npostgres=# \\c pryzbyj\npryzbyj=# CREATE SCHEMA pryzbyj;\npryzbyj=# CREATE FUNCTION pryzbyj.now() RETURNS timestamp LANGUAGE SQL AS $$ SELECT 'today'::timestamp $$;\npryzbyj=# ALTER ROLE pryzbyj SET search_path=pryzbyj,public,pg_catalog;\npryzbyj=# SELECT now();\nnow | 2021-04-15 00:00:00\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 15 Apr 2021 09:01:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to change current time?"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Thu, Apr 15, 2021 at 04:45:44PM +0300, [email protected] wrote:\n>> Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give next\n>> time? (We need it only for testing purposes so if there is any hack, cheat,\n>> etc. It will be fine)\n\n> No, it gets the time from the operating system.\n\nI think there are OS-level solutions for this on some operating systems.\nIf nothing else, you could manually set the system clock; but what you'd\nreally want is for the phony time to be visible to just some processes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Apr 2021 10:13:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to change current time?"
},
{
"msg_contents": "Hi\n\nOn Thu, 15 Apr 2021 at 15:45, [email protected] <[email protected]> wrote:\n\n> Hi,\n>\n> Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give\n> next time? (We need it only for testing purposes so if there is any hack,\n> cheat, etc. It will be fine)\n>\n\nThis is a bad way - don't use now() in queries - use a variable instead.\nLater you can use now as an argument or some constant value.\n\nRegards\n\nPavel",
"msg_date": "Thu, 15 Apr 2021 16:14:27 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to change current time?"
},
{
"msg_contents": "select pg_sleep(1);\n\nчт, 15 апр. 2021 г. в 16:45, [email protected] <[email protected]>:\n\n> Hi,\n>\n> Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give\n> next time? (We need it only for testing purposes so if there is any hack,\n> cheat, etc. It will be fine)\n>\n>\n>\n>\n>\n\n\n-- \nEvgeny Pazhitnov\n\nselect pg_sleep(1);чт, 15 апр. 2021 г. в 16:45, [email protected] <[email protected]>:\nHi, Is there any way to set time that CURRENT_TIMESTAMP and/or now() will give next time? (We need it only for testing purposes so if there is any hack, cheat, etc. It will be fine) \n-- Evgeny Pazhitnov",
"msg_date": "Thu, 15 Apr 2021 17:37:17 +0300",
"msg_from": "Eugene Pazhitnov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is there a way to change current time?"
}
] |
[
{
"msg_contents": "Hello,\n\n \n\nWe want to start migration to POSTGRESQL13 from MSSQL SERVER but we\ncouldnt find any solution for oledb drivers. Please help us to find a\nsolution or any workaround if possible.\n\n \n\n \n\n \n\nThanks.\n\nMustafa\n\n\nHello, We want to start migration to POSTGRESQL13 from MSSQL SERVER but we couldnt find any solution for oledb drivers. Please help us to find a solution or any workaround if possible. Thanks.Mustafa",
"msg_date": "Fri, 16 Apr 2021 13:32:56 +0300",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "OLEDB for PostgreSQL"
},
{
"msg_contents": "Please look at this:\nhttps://www.postgresql.org/ftp/odbc/versions/\n\n\nCum, 2021-04-16 tarihinde 13:32 +0300 saatinde,\[email protected] yazdı:\n> Hello,\n> \n> We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n> couldnt find any solution for oledb drivers. Please help us to find a\n> solution or any workaround if possible.\n> \n> \n> \n> Thanks.\n> Mustafa\n\n\nPlease look at this:https://www.postgresql.org/ftp/odbc/versions/Cum, 2021-04-16 tarihinde 13:32 +0300 saatinde, [email protected] yazdı:Hello, We want to start migration to POSTGRESQL13 from MSSQL SERVER but we couldnt find any solution for oledb drivers. Please help us to find a solution or any workaround if possible. Thanks.Mustafa",
"msg_date": "Fri, 16 Apr 2021 15:31:26 +0300",
"msg_from": "Google FezaCakir <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
},
{
"msg_contents": "[email protected] schrieb am 16.04.2021 um 12:32:\n> We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n> couldnt find any solution for oledb drivers. Please help us to find a\n> solution or any workaround if possible.\n\n\nI don't think there is a free OleDB provider, but there seems to be at least one commercial: https://www.pgoledb.com/\n\nI don't really know the whole OleDB/.Net/ODBC world (it seems, Microsoft changes its strategy around that every other year) - but there is a free .Net provider: https://www.npgsql.org/ not sure if that is of any help\n\nAnd obviously there is an ODBC driver.\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 16 Apr 2021 16:25:17 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 10:25 AM Thomas Kellerer <[email protected]> wrote:\n\n> [email protected] schrieb am 16.04.2021 um 12:32:\n> > We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n> > couldnt find any solution for oledb drivers. Please help us to find a\n> > solution or any workaround if possible.\n>\n> I don't think there is a free OleDB provider, but there seems to be at\n> least one commercial: https://www.pgoledb.com/\n>\n> I don't really know the whole OleDB/.Net/ODBC world (it seems, Microsoft\n> changes its strategy around that every other year) - but there is a free\n> .Net provider: https://www.npgsql.org/ not sure if that is of any help\n>\n> And obviously there is an ODBC driver.\n>\n\nBack in the day, this was possible via the \"Microsoft OLE DB Provider For\nODBC drivers\" and using the Postgres ODBC driver. Is the generic\nMicrosoft-provided OLE DB provider for ODBC not still around?\n\n-- \nJonah H. Harris\n\nOn Fri, Apr 16, 2021 at 10:25 AM Thomas Kellerer <[email protected]> wrote:[email protected] schrieb am 16.04.2021 um 12:32:\n> We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n> couldnt find any solution for oledb drivers. Please help us to find a\n> solution or any workaround if possible.\nI don't think there is a free OleDB provider, but there seems to be at least one commercial: https://www.pgoledb.com/\n\nI don't really know the whole OleDB/.Net/ODBC world (it seems, Microsoft changes its strategy around that every other year) - but there is a free .Net provider: https://www.npgsql.org/ not sure if that is of any help\n\nAnd obviously there is an ODBC driver.Back in the day, this was possible via the \"Microsoft OLE DB Provider For ODBC drivers\" and using the Postgres ODBC driver. Is the generic Microsoft-provided OLE DB provider for ODBC not still around?-- Jonah H. Harris",
"msg_date": "Fri, 16 Apr 2021 17:42:16 -0400",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 05:42:16PM -0400, Jonah H. Harris wrote:\n> \n> Back in the day, this was possible via the \"Microsoft OLE DB Provider For\n> ODBC drivers\" and using the Postgres ODBC driver. Is the generic\n> Microsoft-provided OLE DB provider for ODBC not still around?\n> \n> -- \n> Jonah H. Harris\n\nLooks like it still is:\n\nhttps://docs.microsoft.com/en-us/sql/ado/guide/appendixes/microsoft-ole-db-provider-for-odbc?view=sql-server-ver15\n\nRegards,\nKen\n\n\n",
"msg_date": "Fri, 16 Apr 2021 16:52:37 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
},
{
"msg_contents": "Hi Mustafa,\nYou can look into the SQLine tool. We recently used to migrate MSSQL to\nPostgreSQL. For procedures/functions etc you need to have good amount of\nunderstanding of TSQL and PL-PGSQL. SQLine will convert 60-80% of you TSQL\ncode. Some manual effort is required at the end.\n\nRegards,\nAditya.\n\nOn Fri, Apr 16, 2021 at 4:03 PM <[email protected]> wrote:\n\n> Hello,\n>\n>\n>\n> We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n> couldnt find any solution for oledb drivers. Please help us to find a\n> solution or any workaround if possible.\n>\n>\n>\n>\n>\n>\n>\n> Thanks.\n>\n> Mustafa\n>\n\nHi Mustafa,You can look into the SQLine tool. We recently used to migrate MSSQL to PostgreSQL. For procedures/functions etc you need to have good amount of understanding of TSQL and PL-PGSQL. SQLine will convert 60-80% of you TSQL code. Some manual effort is required at the end.Regards,Aditya.On Fri, Apr 16, 2021 at 4:03 PM <[email protected]> wrote:Hello, We want to start migration to POSTGRESQL13 from MSSQL SERVER but we couldnt find any solution for oledb drivers. Please help us to find a solution or any workaround if possible. Thanks.Mustafa",
"msg_date": "Mon, 19 Apr 2021 18:40:29 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
},
{
"msg_contents": "I have been using SSIS for migrating data only for non-aws stuff.\n\nIf your target is on AWS, you can use SCT nd DMS.\n\nYou might also want to look into the link....https://youtu.be/YKJub0zVztE\n\n\nRegards\nPavan\n\n\nOn Mon, Apr 19, 2021, 8:10 AM aditya desai <[email protected]> wrote:\n\n> Hi Mustafa,\n> You can look into the SQLine tool. We recently used to migrate MSSQL to\n> PostgreSQL. For procedures/functions etc you need to have good amount of\n> understanding of TSQL and PL-PGSQL. SQLine will convert 60-80% of you TSQL\n> code. Some manual effort is required at the end.\n>\n> Regards,\n> Aditya.\n>\n> On Fri, Apr 16, 2021 at 4:03 PM <[email protected]> wrote:\n>\n>> Hello,\n>>\n>>\n>>\n>> We want to start migration to POSTGRESQL13 from MSSQL SERVER but we\n>> couldnt find any solution for oledb drivers. Please help us to find a\n>> solution or any workaround if possible.\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> Thanks.\n>>\n>> Mustafa\n>>\n>\n\nI have been using SSIS for migrating data only for non-aws stuff.If your target is on AWS, you can use SCT nd DMS.You might also want to look into the link....https://youtu.be/YKJub0zVztERegardsPavanOn Mon, Apr 19, 2021, 8:10 AM aditya desai <[email protected]> wrote:Hi Mustafa,You can look into the SQLine tool. We recently used to migrate MSSQL to PostgreSQL. For procedures/functions etc you need to have good amount of understanding of TSQL and PL-PGSQL. SQLine will convert 60-80% of you TSQL code. Some manual effort is required at the end.Regards,Aditya.On Fri, Apr 16, 2021 at 4:03 PM <[email protected]> wrote:Hello, We want to start migration to POSTGRESQL13 from MSSQL SERVER but we couldnt find any solution for oledb drivers. Please help us to find a solution or any workaround if possible. Thanks.Mustafa",
"msg_date": "Tue, 20 Apr 2021 20:45:16 -0500",
"msg_from": "Pavan Pusuluri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OLEDB for PostgreSQL"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16968\nLogged by: Eugen Konkov\nEmail address: [email protected]\nPostgreSQL version: 13.1\nOperating system: Linux Mint 19.3\nDescription: \n\nTLDR;\r\nIf I refer to same column by different ways planner may or may not recognize\noptimization\r\n\r\nselect * from order_total_suma() ots where agreement_id = 3943; \n-- fast\r\nselect * from order_total_suma() ots where (ots.o).agreement_id = 3943; --\nslow\r\n\r\nWhere `order_total_suma` is sql function:\r\n\r\n\t\tSELECT\r\n\t\t sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id \n ) AS agreement_suma,\r\n\t\t sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n(ocd.o).id ) AS order_suma,\r\n\t\t sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).agreement_id,\n(ocd.o).id, (ocd.ic).consumed_period ) AS group_cost,\r\n\t\t sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n(ocd.o).id, (ocd.ic).consumed_period ) AS group_suma,\r\n\t\t max( (ocd.ic).consumed ) OVER( PARTITION BY (ocd.o).agreement_id,\n(ocd.o).id, (ocd.ic).consumed_period ) AS consumed,\r\n\t\t ocd.item_qty, ocd.item_price, ocd.item_cost, ocd.item_suma,\r\n\t\t ocd.o, ocd.c, ocd.p, ocd.ic,\r\n\t\t (ocd.o).id as order_id,\r\n\t\t (ocd.o).agreement_id as agreement_id\r\n\t\tFROM order_cost_details( _target_range ) ocd\r\n\r\nProblem is window function, because ID can not go through. But this occur\nnot always.\r\nWhen I filter by field I partition result by then optimization occur \r\nBUT only when I create an alias for this field and do filtering via this\nalias.\r\n\r\nExpected: apply optimization not only when I do `WHERE agreement_id = XXX`\n\r\nbut and for `WHERE (ots.o).agreement_id = XXX`\r\n\r\nThank you.",
"msg_date": "Fri, 16 Apr 2021 19:18:45 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Now I attarch plans for both queries.\n\ntucha=> \\out f2\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) select * from order_total_suma() ots where agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) select * from order_total_suma() ots where (ots.o).agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select * from order_total_suma() ots where agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select * from order_total_suma() ots where (ots.o).agreement_id = 3943;\n\n\n\nFriday, April 16, 2021, 10:18:45 PM, you wrote:\n\n> The following bug has been logged on the website:\n\n> Bug reference: 16968\n> Logged by: Eugen Konkov\n> Email address: [email protected]\n> PostgreSQL version: 13.1\n> Operating system: Linux Mint 19.3\n> Description: \n\n> TLDR;\n> If I refer to same column by different ways planner may or may not recognize\n> optimization\n\n> select * from order_total_suma() ots where agreement_id = 3943; \n> -- fast\n> select * from order_total_suma() ots where (ots.o).agreement_id = 3943; --\n> slow\n\n> Where `order_total_suma` is sql function:\n\n> SELECT\n> sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id\n> ) AS agreement_suma,\n> sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id ) AS order_suma,\n> sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS group_cost,\n> sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS group_suma,\n> max( (ocd.ic).consumed ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS consumed,\n> ocd.item_qty, ocd.item_price, ocd.item_cost, ocd.item_suma,\n> ocd.o, ocd.c, ocd.p, ocd.ic,\n> (ocd.o).id as order_id,\n> (ocd.o).agreement_id as agreement_id\n> FROM order_cost_details( _target_range ) ocd\n\n> Problem is window function, because 
ID can not go through. But this occur\n> not always.\n> When I filter by field I partition result by then optimization occur \n> BUT only when I create an alias for this field and do filtering via this\n> alias.\n\n> Expected: apply optimization not only when I do `WHERE agreement_id = XXX`\n\n> but and for `WHERE (ots.o).agreement_id = XXX`\n\n> Thank you.\n\n\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Fri, 16 Apr 2021 22:27:35 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Now I create minimal reproducible test case.\nhttps://dbfiddle.uk/?rdbms=postgres_13&fiddle=761a00fb599789d3db31b120851d6341\n\nOptimization is not applyed when I filter/partition by column using composite type name.\n\nLooking at this comparison table, we can see that optimization work only when I refer to column using alias:\n\n (t.ag).ag_id as agreement_id -- making an alias\n\nPARTITION | FILTER | IS USED?\n------------------------------\nALIAS | ORIG | NO\nALIAS | ALIAS | YES\nORIG | ALIAS | NO\nORIG | ORIG | NO\n\nlink to original problem with EXPLAIN ANALYZE: https://stackoverflow.com/q/67492673/4632019\nLinks to similar problems:\nhttps://stackoverflow.com/a/26237464/4632019\nhttps://stackoverflow.com/q/65780112/4632019\n\n\nFriday, April 16, 2021, 10:27:35 PM, you wrote:\n\n\nNow I attarch plans for both queries.\n\ntucha=> \\out f2\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) select * from order_total_suma() ots where agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) select * from order_total_suma() ots where (ots.o).agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select * from order_total_suma() ots where agreement_id = 3943;\ntucha=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) select * from order_total_suma() ots where (ots.o).agreement_id = 3943;\n\n\n\nFriday, April 16, 2021, 10:18:45 PM, you wrote:\n\n> The following bug has been logged on the website:\n\n> Bug reference: 16968\n> Logged by: Eugen Konkov\n> Email address: [email protected]\n> PostgreSQL version: 13.1\n> Operating system: Linux Mint 19.3\n> Description: \n\n> TLDR;\n> If I refer to same column by different ways planner may or may not recognize\n> optimization\n\n> select * from order_total_suma() ots where agreement_id = 3943; \n> -- fast\n> select * from order_total_suma() ots where (ots.o).agreement_id = 3943; --\n> slow\n\n> Where `order_total_suma` is sql function:\n\n> SELECT\n> sum( 
ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id\n> ) AS agreement_suma,\n> sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id ) AS order_suma,\n> sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS group_cost,\n> sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS group_suma,\n> max( (ocd.ic).consumed ) OVER( PARTITION BY (ocd.o).agreement_id,\n> (ocd.o).id, (ocd.ic).consumed_period ) AS consumed,\n> ocd.item_qty, ocd.item_price, ocd.item_cost, ocd.item_suma,\n> ocd.o, ocd.c, ocd.p, ocd.ic,\n> (ocd.o).id as order_id,\n> (ocd.o).agreement_id as agreement_id\n> FROM order_cost_details( _target_range ) ocd\n\n> Problem is window function, because ID can not go through. But this occur\n> not always.\n> When I filter by field I partition result by then optimization occur\n> BUT only when I create an alias for this field and do filtering via this\n> alias.\n\n> Expected: apply optimization not only when I do `WHERE agreement_id = XXX`\n\n> but and for `WHERE (ots.o).agreement_id = XXX`\n\n> Thank you.\n\n\n\n\n--\nBest regards,\nEugen Konkov\n\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Thu, 13 May 2021 16:35:13 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "On Fri, 14 May 2021 at 02:38, Eugen Konkov <[email protected]> wrote:\n> Now I create minimal reproducible test case.\n> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=761a00fb599789d3db31b120851d6341\n>\n> Optimization is not applyed when I filter/partition by column using composite type name.\n\nYou probably already know this part, but let me explain it just in\ncase it's not clear.\n\nThe pushdown of the qual from the top-level query into the subquery,\nor function, in this case, is only legal when the qual references a\ncolumn that's in the PARTITION BY clause of all window functions in\nthe subquery. The reason for this is, if we filter rows before\ncalling the window function, then it could affect which rows are in\nsee in the window's frame. If it did filter, that could cause\nincorrect results. We can relax the restriction a bit if we can\neliminate entire partitions at once. The window function results are\nindependent between partitions, so we can allow qual pushdowns that\nare in all PARTITION BY clauses.\n\nAs for the reason you're having trouble getting this to work, it's\ndown to the way you're using whole-row vars in your targetlist.\n\nA slightly simplified case which shows this problem is:\n\ncreate table ab(a int, b int);\nexplain select * from (select ab as wholerowvar,row_number() over\n(partition by a) from ab) ab where (ab.wholerowvar).a=1;\n\nThe reason it does not work is down to how this is implemented\ninternally. The details are, transformGroupClause() not assigning a\nressortgroupref to the whole-row var. It's unable to because there is\nno way to track which actual column within the whole row var is in the\npartition by clause. When it comes to the code that tries to push the\nqual down into the subquery, check_output_expressions() checks if the\ncolumn in the subquery is ok to accept push downs or not. 
One of the\nchecks is to see if the query has windowing functions and to ensure\nthat the column is in all the PARTITION BY clauses of each windowing\nfunction. That check is done by checking if a ressortgroupref is\nassigned and matches a tleSortGroupRef in the PARTITION BY clause. In\nthis case, it does not match. We didn't assign any ressortgroupref to\nthe whole-row var.\n\nUnfortunately, whole-row vars are a bit to 2nd class citizen when it\ncomes to the query planner. Also, it would be quite a bit of effort to\nmake the planner push down the qual in this case. We'd need some sort\nof ability to assign ressortgroupref to a particular column within a\nwhole-row var and we'd need to adjust the code to check for that when\ndoing subquery pushdowns to allow it to mention which columns within\nwhole-row vars can legally accept pushdowns. I imagine that's\nunlikely to be fixed any time soon. Whole-row vars just don't seem to\nbe used commonly enough to warrant going to the effort of making this\nstuff work.\n\nTo work around this, you should include a reference to the actual\ncolumn in the targetlist of the subquery, or your function, in this\ncase, and ensure you use that same column in the PARTITION BY clause.\nYou'll then need to write that column in your condition that you need\npushed into the subquery. I'm sorry if that messes up your design.\nHowever, I imagine this is not the only optimisation that you'll miss\nout on by doing things the way you are.\n\nDavid\n\n\n",
"msg_date": "Fri, 14 May 2021 11:52:33 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Thank you for detailed explanation. I glad to hear that I can use aliases and this will be recognized and optimization is applied. >We'd need some sort of ability to assign ressortgroupref to a particular column within awhole-row varCould it be possible to create hidden alias in same way as I did that manually? Algorithm seems not complex:1. User refer column from composite type/whole-row: (o).agreement_id2. Create hidden column at select: _o_agreement_id3. Replace other references to (o).agreement_id by _o_agreement_id4. Process query as usual after replacements 14.05.2021, 02:52, \"David Rowley\" <[email protected]>:On Fri, 14 May 2021 at 02:38, Eugen Konkov <[email protected]> wrote: Now I create minimal reproducible test case. https://dbfiddle.uk/?rdbms=postgres_13&fiddle=761a00fb599789d3db31b120851d6341 Optimization is not applyed when I filter/partition by column using composite type name.You probably already know this part, but let me explain it just incase it's not clear.The pushdown of the qual from the top-level query into the subquery,or function, in this case, is only legal when the qual references acolumn that's in the PARTITION BY clause of all window functions inthe subquery. The reason for this is, if we filter rows beforecalling the window function, then it could affect which rows are insee in the window's frame. If it did filter, that could causeincorrect results. We can relax the restriction a bit if we caneliminate entire partitions at once. 
The window function results areindependent between partitions, so we can allow qual pushdowns thatare in all PARTITION BY clauses.As for the reason you're having trouble getting this to work, it'sdown to the way you're using whole-row vars in your targetlist.A slightly simplified case which shows this problem is:create table ab(a int, b int);explain select * from (select ab as wholerowvar,row_number() over(partition by a) from ab) ab where (ab.wholerowvar).a=1;The reason it does not work is down to how this is implementedinternally. The details are, transformGroupClause() not assigning aressortgroupref to the whole-row var. It's unable to because there isno way to track which actual column within the whole row var is in thepartition by clause. When it comes to the code that tries to push thequal down into the subquery, check_output_expressions() checks if thecolumn in the subquery is ok to accept push downs or not. One of thechecks is to see if the query has windowing functions and to ensurethat the column is in all the PARTITION BY clauses of each windowingfunction. That check is done by checking if a ressortgroupref isassigned and matches a tleSortGroupRef in the PARTITION BY clause. Inthis case, it does not match. We didn't assign any ressortgroupref tothe whole-row var.Unfortunately, whole-row vars are a bit to 2nd class citizen when itcomes to the query planner. Also, it would be quite a bit of effort tomake the planner push down the qual in this case. We'd need some sortof ability to assign ressortgroupref to a particular column within awhole-row var and we'd need to adjust the code to check for that whendoing subquery pushdowns to allow it to mention which columns withinwhole-row vars can legally accept pushdowns. I imagine that'sunlikely to be fixed any time soon. 
Whole-row vars just don't seem tobe used commonly enough to warrant going to the effort of making thisstuff work.To work around this, you should include a reference to the actualcolumn in the targetlist of the subquery, or your function, in thiscase, and ensure you use that same column in the PARTITION BY clause.You'll then need to write that column in your condition that you needpushed into the subquery. I'm sorry if that messes up your design.However, I imagine this is not the only optimisation that you'll missout on by doing things the way you are.David",
"msg_date": "Fri, 14 May 2021 15:39:31 +0300",
"msg_from": "KES <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Hello David,\n\nI found a case when `not assigning a ressortgroupref to the whole-row var` cause\nwrong window function calculations.\n\nI use same query. The difference come when I wrap my query into\nfunction. (see full queries in attachment)\n\n1.\nSELECT *\nFROM agreement_totals( tstzrange( '2020-07-01', '2020-08-01' ) )\nWHERE agreement_id = 161::int AND (o).period_id = 10::int\n\n2.\nSELECT *\n sum( .... ) over wagreement\nFROM ....\nWHERE agreement_id = 161::int AND (o).period_id = 10::int\nWINDOW wagreement AS ( PARTITION BY agreement_id )\n\nFor first query window function calculates SUM over all agreements,\nthen some are filtered out by (o).period_id condition.\n\nBut for second query agreements with \"wrong\" (o).period_id are filtered out,\nthen SUM is calculated.\n\nI suppose here is problem with `not assigning a ressortgroupref to the whole-row var`\nwhich cause different calculation when I try to filter: (o).period_id\n\nI will also attach plans for both queries.\n\n\nFriday, May 14, 2021, 2:52:33 AM, you wrote:\n\n> On Fri, 14 May 2021 at 02:38, Eugen Konkov <[email protected]> wrote:\n>> Now I create minimal reproducible test case.\n>> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=761a00fb599789d3db31b120851d6341\n\n>> Optimization is not applyed when I filter/partition by column using composite type name.\n\n> You probably already know this part, but let me explain it just in\n> case it's not clear.\n\n> The pushdown of the qual from the top-level query into the subquery,\n> or function, in this case, is only legal when the qual references a\n> column that's in the PARTITION BY clause of all window functions in\n> the subquery. The reason for this is, if we filter rows before\n> calling the window function, then it could affect which rows are in\n> see in the window's frame. If it did filter, that could cause\n> incorrect results. We can relax the restriction a bit if we can\n> eliminate entire partitions at once. 
The window function results are\n> independent between partitions, so we can allow qual pushdowns that\n> are in all PARTITION BY clauses.\n\n> As for the reason you're having trouble getting this to work, it's\n> down to the way you're using whole-row vars in your targetlist.\n\n> A slightly simplified case which shows this problem is:\n\n> create table ab(a int, b int);\n> explain select * from (select ab as wholerowvar,row_number() over\n> (partition by a) from ab) ab where (ab.wholerowvar).a=1;\n\n> The reason it does not work is down to how this is implemented\n> internally. The details are, transformGroupClause() not assigning a\n> ressortgroupref to the whole-row var. It's unable to because there is\n> no way to track which actual column within the whole row var is in the\n> partition by clause. When it comes to the code that tries to push the\n> qual down into the subquery, check_output_expressions() checks if the\n> column in the subquery is ok to accept push downs or not. One of the\n> checks is to see if the query has windowing functions and to ensure\n> that the column is in all the PARTITION BY clauses of each windowing\n> function. That check is done by checking if a ressortgroupref is\n> assigned and matches a tleSortGroupRef in the PARTITION BY clause. In\n> this case, it does not match. We didn't assign any ressortgroupref to\n> the whole-row var.\n\n> Unfortunately, whole-row vars are a bit to 2nd class citizen when it\n> comes to the query planner. Also, it would be quite a bit of effort to\n> make the planner push down the qual in this case. We'd need some sort\n> of ability to assign ressortgroupref to a particular column within a\n> whole-row var and we'd need to adjust the code to check for that when\n> doing subquery pushdowns to allow it to mention which columns within\n> whole-row vars can legally accept pushdowns. I imagine that's\n> unlikely to be fixed any time soon. 
Whole-row vars just don't seem to\n> be used commonly enough to warrant going to the effort of making this\n> stuff work.\n\n> To work around this, you should include a reference to the actual\n> column in the targetlist of the subquery, or your function, in this\n> case, and ensure you use that same column in the PARTITION BY clause.\n> You'll then need to write that column in your condition that you need\n> pushed into the subquery. I'm sorry if that messes up your design.\n> However, I imagine this is not the only optimisation that you'll miss\n> out on by doing things the way you are.\n\n> David\n\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Sat, 15 May 2021 17:34:16 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "On Sun, 16 May 2021 at 02:34, Eugen Konkov <[email protected]> wrote:\n> I found a case when `not assigning a ressortgroupref to the whole-row var` cause\n> wrong window function calculations.\n>\n> I use same query. The difference come when I wrap my query into\n> function. (see full queries in attachment)\n>\n> 1.\n> SELECT *\n> FROM agreement_totals( tstzrange( '2020-07-01', '2020-08-01' ) )\n> WHERE agreement_id = 161::int AND (o).period_id = 10::int\n>\n> 2.\n> SELECT *\n> sum( .... ) over wagreement\n> FROM ....\n> WHERE agreement_id = 161::int AND (o).period_id = 10::int\n> WINDOW wagreement AS ( PARTITION BY agreement_id )\n>\n> For first query window function calculates SUM over all agreements,\n> then some are filtered out by (o).period_id condition.\n\nThis is unrelated to the optimisation that you were asking about before.\n\nAll that's going on here is that WHERE is evaluated before SELECT.\nThis means that your filtering is done before the window functions are\nexecuted. This is noted in the docs in [1]:\n\n> The rows considered by a window function are those of the “virtual table” produced by the query's FROM clause as filtered by its WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed because it does not meet the WHERE condition is not seen by any window function. A query can contain multiple window functions that slice up the data in different ways using different OVER clauses, but they all act on the same collection of rows defined by this virtual table.\n\nIf you want to filter rows after the window functions are evaluated\nthen you'll likely want to use a subquery.\n\nDavid\n\n[1] https://www.postgresql.org/docs/13/tutorial-window.html\n\n\n",
"msg_date": "Sun, 16 May 2021 02:52:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "On Sat, 15 May 2021 at 00:39, KES <[email protected]> wrote:\n>\n> Thank you for detailed explanation. I glad to hear that I can use aliases and this will be recognized and optimization is applied.\n>\n> >We'd need some sort of ability to assign ressortgroupref to a particular column within a\n> whole-row var\n> Could it be possible to create hidden alias in same way as I did that manually?\n>\n> Algorithm seems not complex:\n> 1. User refer column from composite type/whole-row: (o).agreement_id\n> 2. Create hidden column at select: _o_agreement_id\n> 3. Replace other references to (o).agreement_id by _o_agreement_id\n> 4. Process query as usual after replacements\n\nInternally Postgresql does use a hidden column for columns that are\nrequired for calculations which are not in the SELECT list. e.g ones\nthat are in the GROUP BY / ORDER BY, or in your case a window\nfunction's PARTITION BY. We call these \"resjunk\" columns. The problem\nis you can't reference those from the parent query. If you explicitly\nhad listed that column in the SELECT clause, it won't cost you\nanything more since the planner will add it regardless and just hide\nit from you. When you add it yourself you'll be able to use it in the\nsubquery and you'll be able to filter out the partitions that you\ndon't want.\n\nI really think you're driving yourself down a difficult path by\nexpecting queries with whole-row vars to be optimised just as well as\nusing select * or explicitly listing the columns.\n\nDavid\n\n\n",
"msg_date": "Sun, 16 May 2021 02:59:41 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Hello David,\n\nSaturday, May 15, 2021, 5:52:47 PM, you wrote:\n\n> On Sun, 16 May 2021 at 02:34, Eugen Konkov <[email protected]> wrote:\n>> I found a case when `not assigning a ressortgroupref to the whole-row var` cause\n>> wrong window function calculations.\n>>\n>> I use same query. The difference come when I wrap my query into\n>> function. (see full queries in attachment)\n>>\n>> 1.\n>> SELECT *\n>> FROM agreement_totals( tstzrange( '2020-07-01', '2020-08-01' ) )\n>> WHERE agreement_id = 161::int AND (o).period_id = 10::int\n>>\n>> 2.\n>> SELECT *\n>> sum( .... ) over wagreement\n>> FROM ....\n>> WHERE agreement_id = 161::int AND (o).period_id = 10::int\n>> WINDOW wagreement AS ( PARTITION BY agreement_id )\n>>\n>> For first query window function calculates SUM over all agreements,\n>> then some are filtered out by (o).period_id condition.\n\n> This is unrelated to the optimisation that you were asking about before.\n\n> All that's going on here is that WHERE is evaluated before SELECT.\n> This means that your filtering is done before the window functions are\n> executed. This is noted in the docs in [1]:\n\n>> The rows considered by a window function are those of the “virtual table” produced by the query's FROM clause as filtered by its WHERE, GROUP BY, and HAVING clauses if any. For example, a row removed because it does not meet the WHERE condition is not seen by any window function. 
A query can contain multiple window functions that slice up the data in different ways using different OVER clauses, but they all act on the same collection of rows defined by this virtual table.\n\n> If you want to filter rows after the window functions are evaluated\n> then you'll likely want to use a subquery.\n\n> David\n\n> [1] https://www.postgresql.org/docs/13/tutorial-window.html\n\n\nSorry, I miss that WHERE works first and after it\nwindow function.\n\n>This is unrelated to the optimisation that you were asking about before.\nSo, yes, unrelated.\n\nThank you for your answers.\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Sat, 15 May 2021 19:10:44 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
},
{
"msg_contents": "Hello David,\n\n> I really think you're driving yourself down a difficult path by\n> expecting queries with whole-row vars to be optimised just as well as\n> using select * or explicitly listing the columns.\nYes, I was expect that. I use whole-row because do not want repeat all\n10+ columns at select. I do not use (row1).*, (row2).*, because rows\ncould have same columns. eg: row1.name, row2.name both will be\nnamed as 'name' and then I can not distinguish them.\nSo I select whole-row and put myself into problems ((\n\nIt would be nice if (row1).** will be expanded to: row1_id, row1_name\netc. But this is other question which I already ask at different\nthread.\n\n\n\nSaturday, May 15, 2021, 5:59:41 PM, you wrote:\n\n> On Sat, 15 May 2021 at 00:39, KES <[email protected]> wrote:\n>>\n>> Thank you for detailed explanation. I glad to hear that I can use aliases and this will be recognized and optimization is applied.\n>>\n>> >We'd need some sort of ability to assign ressortgroupref to a particular column within a\n>> whole-row var\n>> Could it be possible to create hidden alias in same way as I did that manually?\n>>\n>> Algorithm seems not complex:\n>> 1. User refer column from composite type/whole-row: (o).agreement_id\n>> 2. Create hidden column at select: _o_agreement_id\n>> 3. Replace other references to (o).agreement_id by _o_agreement_id\n>> 4. Process query as usual after replacements\n\n> Internally Postgresql does use a hidden column for columns that are\n> required for calculations which are not in the SELECT list. e.g ones\n> that are in the GROUP BY / ORDER BY, or in your case a window\n> function's PARTITION BY. We call these \"resjunk\" columns. The problem\n> is you can't reference those from the parent query. If you explicitly\n> had listed that column in the SELECT clause, it won't cost you\n> anything more since the planner will add it regardless and just hide\n> it from you. 
When you add it yourself you'll be able to use it in the\n> subquery and you'll be able to filter out the partitions that you\n> don't want.\n\n> I really think you're driving yourself down a difficult path by\n> expecting queries with whole-row vars to be optimised just as well as\n> using select * or explicitly listing the columns.\n\n> David\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Sat, 15 May 2021 19:10:48 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16968: Planner does not recognize optimization"
}
] |
[
{
"msg_contents": "Hello,\n\nI need to partition a table on an integer column, which represents the\nmonth of an event, so 12 distinct values.\nI am just wondering if any of you has experience about which is the best\nway to go with such a use case, in particular which method pick up between\nrange, list and hash.\n\nThank you very much in avance for your help\nMimo\n\nHello,I need to partition a table on an integer column, which represents the month of an event, so 12 distinct values.I am just wondering if any of you has experience about which is the best way to go with such a use case, in particular which method pick up between range, list and hash.Thank you very much in avance for your helpMimo",
"msg_date": "Sun, 18 Apr 2021 20:07:35 +0200",
"msg_from": "Il Mimo di Creta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Most proper partitioning form on an integer column"
},
{
"msg_contents": "On Sun, Apr 18, 2021 at 08:07:35PM +0200, Il Mimo di Creta wrote:\n> I need to partition a table on an integer column, which represents the\n> month of an event, so 12 distinct values.\n> I am just wondering if any of you has experience about which is the best\n> way to go with such a use case, in particular which method pick up between\n> range, list and hash.\n\nThe partition key you should choose is the one which optimizes your queries -\n(loading and/or reporting).\n\nHow many months of data (tables) will you have ?\nWhat does a typical insert/load query look like ?\nWhat does a typical report query look like ?\nWhat does a typical query look like to \"prune\" old data ?\n\nI think having a separate column for \"month\" may be a bad idea. Consider a\ntimestamptz column instead, and use EXTRACT('month') (or a view or a GENERATED\ncolumn). See here for a query that worked poorly for exactly that reason:\nhttps://www.postgresql.org/message-id/[email protected]\n\nThen, I think you'd use RANGE partitioning on the timestamp column by month:\nFOR VALUES FROM ('2021-04-18 04:00:00-08') TO ('2021-04-18 05:00:00-08')\n\nOtherwise, you might still want to also include the year in the partition key.\nEither with multiple columns in the key (PARTITION BY RANGE (year, month)), or\nsub-partitioning. Otherwise, you have no good way to prune old data - avoiding\nDELETE is a major benefit to partitioning.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 18 Apr 2021 14:28:12 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most proper partitioning form on an integer column"
}
] |
[
{
"msg_contents": "Hi all,\nI'm unable to find (apparently) a way to find out a possible value to\nstart with for effective_io_concurrency.\nI suspect that benchmarking, e.g., using bonnie++ or sysbench and\ntesting with different values of concurrency could help to determine\nthe max number of concurrent request, (tps, lower latency, ecc.).\nIs thjs correct or is there another suggested way?\n\nThanks,\nLuca\n\n\n",
"msg_date": "Thu, 22 Apr 2021 21:45:15 +0200",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 09:45:15PM +0200, Luca Ferrari wrote:\n> Hi all,\n> I'm unable to find (apparently) a way to find out a possible value to\n> start with for effective_io_concurrency.\n> I suspect that benchmarking, e.g., using bonnie++ or sysbench and\n> testing with different values of concurrency could help to determine\n> the max number of concurrent request, (tps, lower latency, ecc.).\n> Is thjs correct or is there another suggested way?\n\nI recommend 256 for SSDs or other RAM-like fsync systems, and maybe\nmaybe 16 for magnetic.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 22 Apr 2021 15:52:32 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 9:52 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, Apr 22, 2021 at 09:45:15PM +0200, Luca Ferrari wrote:\n> > Hi all,\n> > I'm unable to find (apparently) a way to find out a possible value to\n> > start with for effective_io_concurrency.\n> > I suspect that benchmarking, e.g., using bonnie++ or sysbench and\n> > testing with different values of concurrency could help to determine\n> > the max number of concurrent request, (tps, lower latency, ecc.).\n> > Is thjs correct or is there another suggested way?\n>\n> I recommend 256 for SSDs or other RAM-like fsync systems, and maybe\n> maybe 16 for magnetic.\n\n\nThanks Bruce, this is a very good starting point.\nBut is there a rationale about those numbers? I mean, if I change the\nstorage system, how should I set a correct number?\n\nLuca\n\n\n",
"msg_date": "Thu, 22 Apr 2021 21:54:56 +0200",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 09:54:56PM +0200, Luca Ferrari wrote:\n> On Thu, Apr 22, 2021 at 9:52 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, Apr 22, 2021 at 09:45:15PM +0200, Luca Ferrari wrote:\n> > > Hi all,\n> > > I'm unable to find (apparently) a way to find out a possible value to\n> > > start with for effective_io_concurrency.\n> > > I suspect that benchmarking, e.g., using bonnie++ or sysbench and\n> > > testing with different values of concurrency could help to determine\n> > > the max number of concurrent request, (tps, lower latency, ecc.).\n> > > Is thjs correct or is there another suggested way?\n> >\n> > I recommend 256 for SSDs or other RAM-like fsync systems, and maybe\n> > maybe 16 for magnetic.\n> \n> \n> Thanks Bruce, this is a very good starting point.\n> But is there a rationale about those numbers? I mean, if I change the\n> storage system, how should I set a correct number?\n\nUh, you need to study the queue length of the device to see how many\nconcurrent requests it can process.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 22 Apr 2021 15:58:27 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 03:52:32PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 22, 2021 at 09:45:15PM +0200, Luca Ferrari wrote:\n> > Hi all,\n> > I'm unable to find (apparently) a way to find out a possible value to\n> > start with for effective_io_concurrency.\n> > I suspect that benchmarking, e.g., using bonnie++ or sysbench and\n> > testing with different values of concurrency could help to determine\n> > the max number of concurrent request, (tps, lower latency, ecc.).\n> > Is thjs correct or is there another suggested way?\n> \n> I recommend 256 for SSDs or other RAM-like fsync systems, and maybe\n> maybe 16 for magnetic.\n\nNote that the interpretation of this GUC changed in v13.\nhttps://www.postgresql.org/docs/13/release-13.html\n|Change the way non-default effective_io_concurrency values affect concurrency (Thomas Munro)\n|Previously, this value was adjusted before setting the number of concurrent requests. The value is now used directly. Conversion of old values to new ones can be done using:\n|SELECT round(sum(OLDVALUE / n::float)) AS newvalue FROM generate_series(1, OLDVALUE) s(n);\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Apr 2021 15:15:06 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 10:15 PM Justin Pryzby <[email protected]> wrote:\n> Note that the interpretation of this GUC changed in v13.\n> https://www.postgresql.org/docs/13/release-13.html\n> |Change the way non-default effective_io_concurrency values affect concurrency (Thomas Munro)\n> |Previously, this value was adjusted before setting the number of concurrent requests. The value is now used directly. Conversion of old values to new ones can be done using:\n> |SELECT round(sum(OLDVALUE / n::float)) AS newvalue FROM generate_series(1, OLDVALUE) s(n);\n>\n\nYeah, I know, thanks.\nHowever, I'm still curious about which tools to use to get info about\nthe storage queue/concurrency.\n\nLuca\n\n\n",
"msg_date": "Thu, 22 Apr 2021 22:22:59 +0200",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 10:22:59PM +0200, Luca Ferrari wrote:\n> On Thu, Apr 22, 2021 at 10:15 PM Justin Pryzby <[email protected]> wrote:\n> > Note that the interpretation of this GUC changed in v13.\n> > https://www.postgresql.org/docs/13/release-13.html\n> > |Change the way non-default effective_io_concurrency values affect concurrency (Thomas Munro)\n> > |Previously, this value was adjusted before setting the number of concurrent requests. The value is now used directly. Conversion of old values to new ones can be done using:\n> > |SELECT round(sum(OLDVALUE / n::float)) AS newvalue FROM generate_series(1, OLDVALUE) s(n);\n> \n> Yeah, I know, thanks.\n> However, I'm still curious about which tools to use to get info about\n> the storage queue/concurrency.\n\nI think you'd run something like iostat -dkx 1 and watch avgqu-sz.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Apr 2021 15:27:39 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 03:27:39PM -0500, Justin Pryzby wrote:\n> On Thu, Apr 22, 2021 at 10:22:59PM +0200, Luca Ferrari wrote:\n> > On Thu, Apr 22, 2021 at 10:15 PM Justin Pryzby <[email protected]> wrote:\n> > > Note that the interpretation of this GUC changed in v13.\n> > > https://www.postgresql.org/docs/13/release-13.html\n> > > |Change the way non-default effective_io_concurrency values affect concurrency (Thomas Munro)\n> > > |Previously, this value was adjusted before setting the number of concurrent requests. The value is now used directly. Conversion of old values to new ones can be done using:\n> > > |SELECT round(sum(OLDVALUE / n::float)) AS newvalue FROM generate_series(1, OLDVALUE) s(n);\n> > \n> > Yeah, I know, thanks.\n> > However, I'm still curious about which tools to use to get info about\n> > the storage queue/concurrency.\n> \n> I think you'd run something like iostat -dkx 1 and watch avgqu-sz.\n\nFYI, avgqu-sz, as far as I know, is the OS queue size, not the device\nqueue size:\n\n aqu-sz\n The average queue length of the requests that were issued to the device.\n Note: In previous versions, this field was known as avgqu-sz.\n\nIs that requests that were issued to the device from user-space, or\nrequests that were issued to the device by the kernel? I can't tell if:\n\n\thttps://www.circonus.com/2017/07/monitoring-queue-sizes/\n\nclarifies this, but this says it is the former:\n\n\thttps://coderwall.com/p/utc42q/understanding-iostat\n\t\n\tThe avgqu-sz metric is an important value. Its name is rather poorly\n\tchosen as it does not in fact show the number of operations queued but\n\tnot yet serviced. Instead, it shows the number of operations that were\n\teither queued or being serviced. Ideally you want to have an idea of the\n\tvalue of this metric during normal operations for use as a reference\n\twhen trouble occurs. Single digit numbers with the occasional double\n\tdigit spike are safe(ish) values. 
Triple digit numbers are generally\n\tnot.\n\nIt think it is the former, or pending OS I/O, whether that I/O request\nhas been sent to the device, or is waiting to be sent. For example, if\nthe device queue sizxe is 8, and the OS avgqu-sz is 12, it means 8 have\nbeen sent to the device, and four are pending to be send to the device,\nor at least that is how I understand it. Therefore, I am unclear if\navgqu-sz helps here.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 22 Apr 2021 16:51:57 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 2:55 PM Luca Ferrari <[email protected]> wrote:\n>\n> On Thu, Apr 22, 2021 at 9:52 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, Apr 22, 2021 at 09:45:15PM +0200, Luca Ferrari wrote:\n> > > Hi all,\n> > > I'm unable to find (apparently) a way to find out a possible value to\n> > > start with for effective_io_concurrency.\n> > > I suspect that benchmarking, e.g., using bonnie++ or sysbench and\n> > > testing with different values of concurrency could help to determine\n> > > the max number of concurrent request, (tps, lower latency, ecc.).\n> > > Is thjs correct or is there another suggested way?\n> >\n> > I recommend 256 for SSDs or other RAM-like fsync systems, and maybe\n> > maybe 16 for magnetic.\n>\n>\n> Thanks Bruce, this is a very good starting point.\n> But is there a rationale about those numbers? I mean, if I change the\n> storage system, how should I set a correct number?\n\nSee thread, https://postgrespro.com/list/thread-id/2069516\n\nThe setting only impacts certain scan operations, it's not a gamechanger.\n\nmerlin\n\n\n",
"msg_date": "Wed, 7 Jul 2021 16:42:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hint in determining effective_io_concurrency"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 11:42 PM Merlin Moncure <[email protected]> wrote:\n> See thread, https://postgrespro.com/list/thread-id/2069516\n>\n> The setting only impacts certain scan operations, it's not a gamechanger.\n\nUseful, thanks.\n\nLuca\n\n\n",
"msg_date": "Thu, 8 Jul 2021 08:50:30 +0200",
"msg_from": "Luca Ferrari <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hint in determining effective_io_concurrency"
}
] |
[
{
"msg_contents": "I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n\nSimon.",
"msg_date": "Sat, 24 Apr 2021 18:27:08 +0000",
"msg_from": "Simon Connah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does btrfs on Linux have a negative performance impact for PostgreSQL\n 13?"
},
{
"msg_contents": "On Sat, Apr 24, 2021 at 06:27:08PM +0000, Simon Connah wrote:\n> I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n\nIs there some reason the question is specific to postgres13 , or did you just\nsay that because it's your development target for your project.\n\nI think it almost certainly depends more on your project than on postgres 13.\n\nIt may well be that performance is better under btrfs, maybe due to compression\nor COW. But you'd have to test what you're doing to find out - and maybe write\nup the results.\n\nAlso, it's very possible that btfs performs better for (say) report queries,\nbut worse for data loading. Maybe you care more about reporting, but that's\nnot true for everyone.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 24 Apr 2021 13:45:48 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "On Sat, Apr 24, 2021 at 01:45:48PM -0500, Justin Pryzby wrote:\n> On Sat, Apr 24, 2021 at 06:27:08PM +0000, Simon Connah wrote:\n> > I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n> \n> Is there some reason the question is specific to postgres13 , or did you just\n> say that because it's your development target for your project.\n> \n> I think it almost certainly depends more on your project than on postgres 13.\n> \n> It may well be that performance is better under btrfs, maybe due to compression\n> or COW. But you'd have to test what you're doing to find out - and maybe write\n> up the results.\n> \n> Also, it's very possible that btfs performs better for (say) report queries,\n> but worse for data loading. Maybe you care more about reporting, but that's\n> not true for everyone.\n\nMy question is whether btrfs is reliable enough or write-durable enough\nfor Postgres. I would need a pretty good reason to not use ext4 or xfs.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 24 Apr 2021 15:00:43 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "\n\n> On Apr 24, 2021, at 11:27, Simon Connah <[email protected]> wrote:\n> \n> I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n\nThis is a few years old, but Tomas Vondra did a presentation comparing major Linux file systems for PostgreSQL:\n\n\thttps://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs\n\n",
"msg_date": "Sat, 24 Apr 2021 12:02:50 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "\nOn 4/24/21 9:02 PM, Christophe Pettus wrote:\n> \n> \n>> On Apr 24, 2021, at 11:27, Simon Connah <[email protected]> wrote:\n>>\n>> I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n> \n> This is a few years old, but Tomas Vondra did a presentation comparing major Linux file systems for PostgreSQL:\n> \n> \thttps://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs\n> \n\nThat talk was ages ago, though. The general conclusions may be still \nvalid, but maybe btrfs improved a lot - I haven't done any testing since \nthen. Not sure about durability, but there are companies using btrfs so \nperhaps it's fine - not sure.\n\nArguably, a lot of this also depends on the exact workload - the issues \nI saw with btrfs were with OLTP stress test, it could have performed \nmuch better with other workloads.\n\n\nregards\nTomas\n\n\n",
"msg_date": "Sat, 24 Apr 2021 21:52:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "On 24/04/21, Christophe Pettus ([email protected]) wrote:\n> > On Apr 24, 2021, at 11:27, Simon Connah <[email protected]> wrote:\n> > \n> > I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n> \n> This is a few years old, but Tomas Vondra did a presentation comparing major Linux file systems for PostgreSQL:\n> \n> \thttps://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs\n\nI guess btrfs and zfs on linux performance might have improved somewhat since Tomas' analysis in 2015.\n\nPersonally I've been a keen personal user of btrfs for over 5 years for its snapshot support, transparent compression and bitrot detection. However I can't think of a reason to use it for a production server. It's slower than ext4 and xfs, postgresql's dumping and streaming are probably better bets than snapshots for backup, and relatively few others are likely to be comfortable administering it. But maybe I'm missing something.\n\nPhoronix run btrfs benchmarks from time-to-time. See https://www.phoronix.com/scan.php?page=search&q=Btrfs\n\nRory\n\n\n",
"msg_date": "Sat, 24 Apr 2021 21:03:38 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Saturday, April 24th, 2021 at 21:03, Rory Campbell-Lange <[email protected]> wrote:\n\n> On 24/04/21, Christophe Pettus ([email protected]) wrote:\n> \n\n> > > On Apr 24, 2021, at 11:27, Simon Connah [email protected] wrote:\n> > > \n\n> > > I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n> > \n\n> > This is a few years old, but Tomas Vondra did a presentation comparing major Linux file systems for PostgreSQL:\n> > \n\n> > https://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs\n> \n\n> I guess btrfs and zfs on linux performance might have improved somewhat since Tomas' analysis in 2015.\n> \n\n> Personally I've been a keen personal user of btrfs for over 5 years for its snapshot support, transparent compression and bitrot detection. However I can't think of a reason to use it for a production server. It's slower than ext4 and xfs, postgresql's dumping and streaming are probably better bets than snapshots for backup, and relatively few others are likely to be comfortable administering it. But maybe I'm missing something.\n> \n\n> Phoronix run btrfs benchmarks from time-to-time. See https://www.phoronix.com/scan.php?page=search&q=Btrfs\n> \n\n> Rory\n\nI should have been more explicit in my original post. I use OpenSUSE Tumbleweed and btrfs is the default filesystem. I also believe SUSE Linux Enterprise Server uses btrfs as their default which seems to indicate that at least one enterprise-grade Linux distribution trusts btrfs for their default install.\n\nI'd love to do some performance testing myself but the only way I could do it would be inside virtual machines which would make the numbers meaningless. 
I'll certainly do some reading on the subject and look at that presentation that was mentioned.",
"msg_date": "Sun, 25 Apr 2021 08:29:04 +0000",
"msg_from": "Simon Connah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "Hi Bruce, everybody,\n\ncompression ?\n\nI am currently working on a project to move an oracle db to postgres.\nThe db is 15 TB.\nwith Oracle compression it does use 5 TB of disk space.\n\nIf we cannot compress the whole thing, the project loses its economic base.\n(added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)\n\nwe do test zfs, and we will give a try to btrfs.\n\nany suggestion ?\n\nthanks\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Sat, Apr 24, 2021 at 9:00 PM Bruce Momjian <[email protected]> wrote:\n\n> On Sat, Apr 24, 2021 at 01:45:48PM -0500, Justin Pryzby wrote:\n> > On Sat, Apr 24, 2021 at 06:27:08PM +0000, Simon Connah wrote:\n> > > I'm curious, really. I use btrfs as my filesystem on my home systems\n> and am setting up a server as I near releasing my project. I planned to use\n> btrfs on the server, but it got me thinking about PostgreSQL 13. Does\n> anyone know if it would have a major performance impact?\n> >\n> > Is there some reason the question is specific to postgres13 , or did you\n> just\n> > say that because it's your development target for your project.\n> >\n> > I think it almost certainly depends more on your project than on\n> postgres 13.\n> >\n> > It may well be that performance is better under btrfs, maybe due to\n> compression\n> > or COW. But you'd have to test what you're doing to find out - and\n> maybe write\n> > up the results.\n> >\n> > Also, it's very possible that btfs performs better for (say) report\n> queries,\n> > but worse for data loading. Maybe you care more about reporting, but\n> that's\n> > not true for everyone.\n>\n> My question is whether btrfs is reliable enough or write-durable enough\n> for Postgres. 
I would need a pretty good reason to not use ext4 or xfs.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n>\n\nHi Bruce, everybody,compression ?I am currently working on a project to move an oracle db to postgres.The db is 15 TB.with Oracle compression it does use 5 TB of disk space.If we cannot compress the whole thing, the project loses its economic base. (added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)we do test zfs, and we will give a try to btrfs.any suggestion ?thanksMarc MILLASSenior Architect+33607850334www.mokadb.comOn Sat, Apr 24, 2021 at 9:00 PM Bruce Momjian <[email protected]> wrote:On Sat, Apr 24, 2021 at 01:45:48PM -0500, Justin Pryzby wrote:\n> On Sat, Apr 24, 2021 at 06:27:08PM +0000, Simon Connah wrote:\n> > I'm curious, really. I use btrfs as my filesystem on my home systems and am setting up a server as I near releasing my project. I planned to use btrfs on the server, but it got me thinking about PostgreSQL 13. Does anyone know if it would have a major performance impact?\n> \n> Is there some reason the question is specific to postgres13 , or did you just\n> say that because it's your development target for your project.\n> \n> I think it almost certainly depends more on your project than on postgres 13.\n> \n> It may well be that performance is better under btrfs, maybe due to compression\n> or COW. But you'd have to test what you're doing to find out - and maybe write\n> up the results.\n> \n> Also, it's very possible that btfs performs better for (say) report queries,\n> but worse for data loading. Maybe you care more about reporting, but that's\n> not true for everyone.\n\nMy question is whether btrfs is reliable enough or write-durable enough\nfor Postgres. 
I would need a pretty good reason to not use ext4 or xfs.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Mon, 26 Apr 2021 14:47:32 +0200",
"msg_from": "Marc Millas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "On 26/04/21, Marc Millas ([email protected]) wrote:\n> compression ?\n> \n> I am currently working on a project to move an oracle db to postgres.\n> The db is 15 TB.\n> with Oracle compression it does use 5 TB of disk space.\n> \n> If we cannot compress the whole thing, the project loses its economic base.\n> (added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)\n> \n> we do test zfs, and we will give a try to btrfs.\n\nI've been using btrfs with lzo compression for several years on my\npersonal laptop and some non-critical backup systems with no trouble.\n(In fact btrfs has helped us recover from some disk failures really\nwell.) While I run postgresql on my machine it is for light testing\npurposes so I wouldn't want to comment on its suitability for\nproduction.\n\nThere are some differences reported here between lzo and zlib\ncompression performance for Postgresql:\nhttps://sudonull.com/post/96976-PostgreSQL-and-btrfs-elephant-on-an-oil-diet\n\nzstd compression support for btrfs is reported on by Phoronix here:\nhttps://www.phoronix.com/scan.php?page=article&item=btrfs-zstd-compress&num=2\n\nThe compression page of the btrfs wiki is here:\nhttps://btrfs.wiki.kernel.org/index.php/Compression\n\nYou might want to armor yourself for possible problems by reading the\nDebian btrfs wiki page: https://wiki.debian.org/Btrfs\n\nIf you test your workload please let us know your results.\n\nRory\n\n\n\n",
"msg_date": "Mon, 26 Apr 2021 14:43:07 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
},
{
"msg_contents": "Interesting !\nthe debian blog saying that\n\" but only users who are interested in debugging future bugs should use\ntransparent compression at this time\"\nmakes me feel its less urgent to test this option...\n\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Mon, Apr 26, 2021 at 3:43 PM Rory Campbell-Lange <[email protected]>\nwrote:\n\n> On 26/04/21, Marc Millas ([email protected]) wrote:\n> > compression ?\n> >\n> > I am currently working on a project to move an oracle db to postgres.\n> > The db is 15 TB.\n> > with Oracle compression it does use 5 TB of disk space.\n> >\n> > If we cannot compress the whole thing, the project loses its economic\n> base.\n> > (added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)\n> >\n> > we do test zfs, and we will give a try to btrfs.\n>\n> I've been using btrfs with lzo compression for several years on my\n> personal laptop and some non-critical backup systems with no trouble.\n> (In fact btrfs has helped us recover from some disk failures really\n> well.) 
While I run postgresql on my machine it is for light testing\n> purposes so I wouldn't want to comment on its suitability for\n> production.\n>\n> There are some differences reported here between lzo and zlib\n> compression performance for Postgresql:\n>\n> https://sudonull.com/post/96976-PostgreSQL-and-btrfs-elephant-on-an-oil-diet\n>\n> zstd compression support for btrfs is reported on by Phoronix here:\n>\n> https://www.phoronix.com/scan.php?page=article&item=btrfs-zstd-compress&num=2\n>\n> The compression page of the btrfs wiki is here:\n> https://btrfs.wiki.kernel.org/index.php/Compression\n>\n> You might want to armor yourself for possible problems by reading the\n> Debian btrfs wiki page: https://wiki.debian.org/Btrfs\n>\n> If you test your workload please let us know your results.\n>\n> Rory\n>\n>\n\nInteresting !the debian blog saying that\"\n\nbut only users who are interested in debugging future bugs should use transparent compression at this time\"makes me feel its less urgent to test this option... Marc MILLASSenior Architect+33607850334www.mokadb.comOn Mon, Apr 26, 2021 at 3:43 PM Rory Campbell-Lange <[email protected]> wrote:On 26/04/21, Marc Millas ([email protected]) wrote:\n> compression ?\n> \n> I am currently working on a project to move an oracle db to postgres.\n> The db is 15 TB.\n> with Oracle compression it does use 5 TB of disk space.\n> \n> If we cannot compress the whole thing, the project loses its economic base.\n> (added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)\n> \n> we do test zfs, and we will give a try to btrfs.\n\nI've been using btrfs with lzo compression for several years on my\npersonal laptop and some non-critical backup systems with no trouble.\n(In fact btrfs has helped us recover from some disk failures really\nwell.) 
While I run postgresql on my machine it is for light testing\npurposes so I wouldn't want to comment on its suitability for\nproduction.\n\nThere are some differences reported here between lzo and zlib\ncompression performance for Postgresql:\nhttps://sudonull.com/post/96976-PostgreSQL-and-btrfs-elephant-on-an-oil-diet\n\nzstd compression support for btrfs is reported on by Phoronix here:\nhttps://www.phoronix.com/scan.php?page=article&item=btrfs-zstd-compress&num=2\n\nThe compression page of the btrfs wiki is here:\nhttps://btrfs.wiki.kernel.org/index.php/Compression\n\nYou might want to armor yourself for possible problems by reading the\nDebian btrfs wiki page: https://wiki.debian.org/Btrfs\n\nIf you test your workload please let us know your results.\n\nRory",
"msg_date": "Mon, 26 Apr 2021 18:30:43 +0200",
"msg_from": "Marc Millas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does btrfs on Linux have a negative performance impact for\n PostgreSQL 13?"
}
] |
[
{
"msg_contents": "Hi!\n\nMy question is: is it possible to optimize function order execution?\n\nHere's explanation:\n\nI have a bunch of queries that have volatile quals, some more than one. For example:\n\nSELECT *\n FROM clients\n WHERE some_func(client_id)\n AND some_other_func(client_id) \n\nNow, I know that having volatile function quals is not a good practice, but alas, it is what it is.\n\nIn this contrived example, some_func filters about 50% of clients, whereas some_other_func around 5%.\n\nIf PostgreSQL would execute them \"serially\", some_other_func would only run for 50% of the clients, cutting execution time. What I've seen is that volatile functions execute always.\n\nIs the reason this happen because the function can modify the result from the outer query?\n\nLuis R. Weck \n\n\n",
"msg_date": "Tue, 27 Apr 2021 15:52:37 -0300 (BRT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Order of execution"
},
{
"msg_contents": "Hello,\n\nLe 27/04/2021 à 20:52, [email protected] a écrit :\n> My question is: is it possible to optimize function order execution?\n\nI guess you could change the cost of one of the functions.\n\n\nI personally rewrite my queries but I don't know if it's good practice:\n\nWITH pre AS (\n SELECT client_id\n FROM clients\n WHERE some_other_func(client_id)\n)\nSELECT *\nFROM clients\nJOIN pre USING(client_id)\nWHERE some_func(client_id)\n\nBest regards,\n\nJC\n\n\n",
"msg_date": "Tue, 27 Apr 2021 21:49:59 +0200",
"msg_from": "Jean-Christophe Boggio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Order of execution"
}
] |
[
{
"msg_contents": "Hi,\nOne of the procs which accept tabletype as parameter gives below error\nwhile being called from Application. Could not find a concrete solution for\nthis. Can someone help?\n\ncall PROCEDURE ABC (p_optiontable optiontype)\n\n\n\nBelow is the error while executing proc -\n\n“the clr type system.data.datatable isn't natively supported by npgsql or\nyour postgresql. to use it with a postgresql composite you need to specify\ndatatypename or to map it, please refer to the documentation.”\n\n\nRegards,\n\nAditya.\n\nHi,One of the procs which accept tabletype as parameter gives below error while being called from Application. Could not find a concrete solution for this. Can someone help?call PROCEDURE ABC (p_optiontable optiontype)\n \nBelow is the error while executing proc - \n“the\nclr type system.data.datatable isn't natively supported by npgsql or your\npostgresql. to use it with a postgresql composite you need to specify\ndatatypename or to map it, please refer to the documentation.”Regards,Aditya.",
"msg_date": "Thu, 29 Apr 2021 14:52:23 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error while calling proc with table type from Application"
},
{
"msg_contents": "On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n> Hi,\n> One of the procs which accept tabletype as parameter gives below error\n> while being called from Application. Could not find a concrete solution for\n> this. Can someone help?\n> \n> call PROCEDURE ABC (p_optiontable optiontype)\n\nWhat is PROCEDURE ABC ? If you created it, send its definition with your problem report.\n\n> Below is the error while executing proc -\n\nHow are you executing it? This seems like an error from npgsl, not postgres.\nIt may be a client-side error, and it may be that the query isn't even being\nsent to the server at that point.\n\n> “the clr type system.data.datatable isn't natively supported by npgsql or\n> your postgresql. to use it with a postgresql composite you need to specify\n> datatypename or to map it, please refer to the documentation.”\n\nDid you do this ?\nhttps://www.npgsql.org/doc/types/enums_and_composites.html\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 29 Apr 2021 08:02:36 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error while calling proc with table type from Application\n (npgsql)"
},
{
"msg_contents": "Hi Justin,\nThanks for your response. We have a user defined type created as below and\nwe need to pass this user defined parameter to a procedure from .net code.\nBasically the procedure needs to accept multiple rows as parameters(user\ndefined table type). This happened seamlessly in SQL Server but while doing\nit in Postgres after migration we get the error mentioned in the above\nchain. Is theere any way we can achieve this?\n\nCREATE TYPE public.optiontype AS (\nprojectid integer,\noptionid integer,\nphaseid integer,\nremarks text\n);\n\nRegards,\nAditya.\n\n\n\nOn Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n> > Hi,\n> > One of the procs which accept tabletype as parameter gives below error\n> > while being called from Application. Could not find a concrete solution\n> for\n> > this. Can someone help?\n> >\n> > call PROCEDURE ABC (p_optiontable optiontype)\n>\n> What is PROCEDURE ABC ? If you created it, send its definition with your\n> problem report.\n>\n> > Below is the error while executing proc -\n>\n> How are you executing it? This seems like an error from npgsl, not\n> postgres.\n> It may be a client-side error, and it may be that the query isn't even\n> being\n> sent to the server at that point.\n>\n> > “the clr type system.data.datatable isn't natively supported by npgsql or\n> > your postgresql. to use it with a postgresql composite you need to\n> specify\n> > datatypename or to map it, please refer to the documentation.”\n>\n> Did you do this ?\n> https://www.npgsql.org/doc/types/enums_and_composites.html\n>\n> --\n> Justin\n>\n\nHi Justin,Thanks for your response. We have a user defined type created as below and we need to pass this user defined parameter to a procedure from .net code. Basically the procedure needs to accept multiple rows as parameters(user defined table type). 
This happened seamlessly in SQL Server but while doing it in Postgres after migration we get the error mentioned in the above chain. Is theere any way we can achieve this?CREATE TYPE public.optiontype AS ( projectid integer, optionid integer, phaseid integer, remarks text);Regards,Aditya.On Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]> wrote:On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n> Hi,\n> One of the procs which accept tabletype as parameter gives below error\n> while being called from Application. Could not find a concrete solution for\n> this. Can someone help?\n> \n> call PROCEDURE ABC (p_optiontable optiontype)\n\nWhat is PROCEDURE ABC ? If you created it, send its definition with your problem report.\n\n> Below is the error while executing proc -\n\nHow are you executing it? This seems like an error from npgsl, not postgres.\nIt may be a client-side error, and it may be that the query isn't even being\nsent to the server at that point.\n\n> “the clr type system.data.datatable isn't natively supported by npgsql or\n> your postgresql. to use it with a postgresql composite you need to specify\n> datatypename or to map it, please refer to the documentation.”\n\nDid you do this ?\nhttps://www.npgsql.org/doc/types/enums_and_composites.html\n\n-- \nJustin",
"msg_date": "Fri, 30 Apr 2021 18:32:44 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Error while calling proc with table type from Application\n (npgsql)"
},
{
"msg_contents": "aditya desai <[email protected]>\n6:32 PM (19 minutes ago)\nto Justin, Pgsql\nHi Justin,\nThanks for your response. We have a user defined type created as below and\nwe need to pass this user defined parameter to a procedure from .net code.\nBasically the procedure needs to accept multiple rows as parameters(user\ndefined table type). This happened seamlessly in SQL Server but while doing\nit in Postgres after migration we get the error mentioned in the above\nchain. Is theere any way we can achieve this?\n\nCREATE TYPE public.optiontype AS (\nprojectid integer,\noptionid integer,\nphaseid integer,\nremarks text\n);\n\nAlso here is a sample procedure.\n\n CREATE OR REPLACE procedure SaveAssessmentInfo\n (\n\n p_Optiontable OptionType\n )\n\nLANGUAGE 'plpgsql'\n\nAS $BODY$\n\nBEGIN\n\n insert into tempOptions\n select * from p_Optiontable;\n\nEND\n\n\n END;\n $BODY$;\n\nRegards,\nAditya.\n\nOn Fri, Apr 30, 2021 at 6:32 PM aditya desai <[email protected]> wrote:\n\n> Hi Justin,\n> Thanks for your response. We have a user defined type created as below\n> and we need to pass this user defined parameter to a procedure from .net\n> code. Basically the procedure needs to accept multiple rows as\n> parameters(user defined table type). This happened seamlessly in SQL Server\n> but while doing it in Postgres after migration we get the error mentioned\n> in the above chain. Is theere any way we can achieve this?\n>\n> CREATE TYPE public.optiontype AS (\n> projectid integer,\n> optionid integer,\n> phaseid integer,\n> remarks text\n> );\n>\n> Regards,\n> Aditya.\n>\n>\n>\n> On Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n>> > Hi,\n>> > One of the procs which accept tabletype as parameter gives below error\n>> > while being called from Application. Could not find a concrete solution\n>> for\n>> > this. 
Can someone help?\n>> >\n>> > call PROCEDURE ABC (p_optiontable optiontype)\n>>\n>> What is PROCEDURE ABC ? If you created it, send its definition with your\n>> problem report.\n>>\n>> > Below is the error while executing proc -\n>>\n>> How are you executing it? This seems like an error from npgsl, not\n>> postgres.\n>> It may be a client-side error, and it may be that the query isn't even\n>> being\n>> sent to the server at that point.\n>>\n>> > “the clr type system.data.datatable isn't natively supported by npgsql\n>> or\n>> > your postgresql. to use it with a postgresql composite you need to\n>> specify\n>> > datatypename or to map it, please refer to the documentation.”\n>>\n>> Did you do this ?\n>> https://www.npgsql.org/doc/types/enums_and_composites.html\n>>\n>> --\n>> Justin\n>>\n>\n\naditya desai <[email protected]>6:32 PM (19 minutes ago)to Justin, PgsqlHi Justin,Thanks for your response. We have a user defined type created as below and we need to pass this user defined parameter to a procedure from .net code. Basically the procedure needs to accept multiple rows as parameters(user defined table type). This happened seamlessly in SQL Server but while doing it in Postgres after migration we get the error mentioned in the above chain. Is theere any way we can achieve this?CREATE TYPE public.optiontype AS ( projectid integer, optionid integer, phaseid integer, remarks text);Also here is a sample procedure. CREATE OR REPLACE procedure SaveAssessmentInfo ( p_Optiontable OptionType ) LANGUAGE 'plpgsql'AS $BODY$ BEGIN insert into tempOptions select * from p_Optiontable;END END; $BODY$;Regards,Aditya.On Fri, Apr 30, 2021 at 6:32 PM aditya desai <[email protected]> wrote:Hi Justin,Thanks for your response. We have a user defined type created as below and we need to pass this user defined parameter to a procedure from .net code. Basically the procedure needs to accept multiple rows as parameters(user defined table type). 
This happened seamlessly in SQL Server but while doing it in Postgres after migration we get the error mentioned in the above chain. Is theere any way we can achieve this?CREATE TYPE public.optiontype AS ( projectid integer, optionid integer, phaseid integer, remarks text);Regards,Aditya.On Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]> wrote:On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n> Hi,\n> One of the procs which accept tabletype as parameter gives below error\n> while being called from Application. Could not find a concrete solution for\n> this. Can someone help?\n> \n> call PROCEDURE ABC (p_optiontable optiontype)\n\nWhat is PROCEDURE ABC ? If you created it, send its definition with your problem report.\n\n> Below is the error while executing proc -\n\nHow are you executing it? This seems like an error from npgsl, not postgres.\nIt may be a client-side error, and it may be that the query isn't even being\nsent to the server at that point.\n\n> “the clr type system.data.datatable isn't natively supported by npgsql or\n> your postgresql. to use it with a postgresql composite you need to specify\n> datatypename or to map it, please refer to the documentation.”\n\nDid you do this ?\nhttps://www.npgsql.org/doc/types/enums_and_composites.html\n\n-- \nJustin",
"msg_date": "Fri, 30 Apr 2021 18:52:50 +0530",
"msg_from": "aditya desai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Error while calling proc with table type from Application\n (npgsql)"
},
{
"msg_contents": "are you referring to this\nSQL SERVER User Defined Table Type and Table Valued Parameters - SqlSkull\n<https://sqlskull.com/2020/01/04/sql-server-user-defined-table-type-and-table-valued-parameters/>\n\nI have not used/heard a similar *select * from custom_type*\nso just trying to help :)\n\n// src table\npostgres=# \\d tt\n Table \"public.tt\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n x | integer | | |\n y | text | | |\n\n// dst table\npostgres=# \\d tt_clone\n Table \"public.tt_clone\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n x | integer | | |\n y | text | | |\n\n// function to insert into dst table tt_clone from src tt\n//tables input as tt table rowtype\nCREATE OR REPLACE function tt_fn (myrow tt) returns void\nLANGUAGE 'plpgsql'\nAS $BODY$\nBEGIN\n\n insert into tt_clone(x, y)\n select myrow.x, myrow.y;\n\nEND;\n$BODY$;\nCREATE FUNCTION\n\npostgres=# select tt_fn(tt.*) from tt;\n tt_fn\n-------\n\n\n(2 rows)\n\n\nI am not sure how to do that with \"call proc\" for stored procedures.\n\ni think it may have to be tweaked to either make the input param variadic\nor an array and loop through rows\nlike this\nHow to pass multiple rows to PostgreSQL function? - Stack Overflow\n<https://stackoverflow.com/questions/43811093/how-to-pass-multiple-rows-to-postgresql-function>\n\ni guess some should be able to help given there is more context to your\nquery.\n\nThanks,\nVijay\n\n\nOn Fri, 30 Apr 2021 at 18:53, aditya desai <[email protected]> wrote:\n\n>\n> aditya desai <[email protected]>\n> 6:32 PM (19 minutes ago)\n> to Justin, Pgsql\n> Hi Justin,\n> Thanks for your response. We have a user defined type created as below\n> and we need to pass this user defined parameter to a procedure from .net\n> code. Basically the procedure needs to accept multiple rows as\n> parameters(user defined table type). 
This happened seamlessly in SQL Server\n> but while doing it in Postgres after migration we get the error mentioned\n> in the above chain. Is theere any way we can achieve this?\n>\n> CREATE TYPE public.optiontype AS (\n> projectid integer,\n> optionid integer,\n> phaseid integer,\n> remarks text\n> );\n>\n> Also here is a sample procedure.\n>\n> CREATE OR REPLACE procedure SaveAssessmentInfo\n> (\n>\n> p_Optiontable OptionType\n> )\n>\n> LANGUAGE 'plpgsql'\n>\n> AS $BODY$\n>\n> BEGIN\n>\n> insert into tempOptions\n> select * from p_Optiontable;\n>\n> END\n>\n>\n> END;\n> $BODY$;\n>\n> Regards,\n> Aditya.\n>\n> On Fri, Apr 30, 2021 at 6:32 PM aditya desai <[email protected]> wrote:\n>\n>> Hi Justin,\n>> Thanks for your response. We have a user defined type created as below\n>> and we need to pass this user defined parameter to a procedure from .net\n>> code. Basically the procedure needs to accept multiple rows as\n>> parameters(user defined table type). This happened seamlessly in SQL Server\n>> but while doing it in Postgres after migration we get the error mentioned\n>> in the above chain. Is theere any way we can achieve this?\n>>\n>> CREATE TYPE public.optiontype AS (\n>> projectid integer,\n>> optionid integer,\n>> phaseid integer,\n>> remarks text\n>> );\n>>\n>> Regards,\n>> Aditya.\n>>\n>>\n>>\n>> On Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]>\n>> wrote:\n>>\n>>> On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n>>> > Hi,\n>>> > One of the procs which accept tabletype as parameter gives below error\n>>> > while being called from Application. Could not find a concrete\n>>> solution for\n>>> > this. Can someone help?\n>>> >\n>>> > call PROCEDURE ABC (p_optiontable optiontype)\n>>>\n>>> What is PROCEDURE ABC ? If you created it, send its definition with\n>>> your problem report.\n>>>\n>>> > Below is the error while executing proc -\n>>>\n>>> How are you executing it? 
This seems like an error from npgsl, not\n>>> postgres.\n>>> It may be a client-side error, and it may be that the query isn't even\n>>> being\n>>> sent to the server at that point.\n>>>\n>>> > “the clr type system.data.datatable isn't natively supported by npgsql\n>>> or\n>>> > your postgresql. to use it with a postgresql composite you need to\n>>> specify\n>>> > datatypename or to map it, please refer to the documentation.”\n>>>\n>>> Did you do this ?\n>>> https://www.npgsql.org/doc/types/enums_and_composites.html\n>>>\n>>> --\n>>> Justin\n>>>\n>>\n\nare you referring to this SQL SERVER User Defined Table Type and Table Valued Parameters - SqlSkullI have not used/heard a similar select * from custom_type so just trying to help :)// src tablepostgres=# \\d tt Table \"public.tt\" Column | Type | Collation | Nullable | Default--------+---------+-----------+----------+--------- x | integer | | | y | text | | |// dst tablepostgres=# \\d tt_clone Table \"public.tt_clone\" Column | Type | Collation | Nullable | Default--------+---------+-----------+----------+--------- x | integer | | | y | text | | |// function to insert into dst table tt_clone from src tt//tables input as tt table rowtypeCREATE OR REPLACE function tt_fn (myrow tt) returns voidLANGUAGE 'plpgsql'AS $BODY$BEGIN insert into tt_clone(x, y) select myrow.x, myrow.y;END;$BODY$;CREATE FUNCTIONpostgres=# select tt_fn(tt.*) from tt; tt_fn-------(2 rows)I am not sure how to do that with \"call proc\" for stored procedures.i think it may have to be tweaked to either make the input param variadic or an array and loop through rowslike this How to pass multiple rows to PostgreSQL function? - Stack Overflowi guess some should be able to help given there is more context to your query.Thanks,VijayOn Fri, 30 Apr 2021 at 18:53, aditya desai <[email protected]> wrote:aditya desai <[email protected]>6:32 PM (19 minutes ago)to Justin, PgsqlHi Justin,Thanks for your response. 
We have a user defined type created as below and we need to pass this user defined parameter to a procedure from .net code. Basically the procedure needs to accept multiple rows as parameters(user defined table type). This happened seamlessly in SQL Server but while doing it in Postgres after migration we get the error mentioned in the above chain. Is theere any way we can achieve this?CREATE TYPE public.optiontype AS ( projectid integer, optionid integer, phaseid integer, remarks text);Also here is a sample procedure. CREATE OR REPLACE procedure SaveAssessmentInfo ( p_Optiontable OptionType ) LANGUAGE 'plpgsql'AS $BODY$ BEGIN insert into tempOptions select * from p_Optiontable;END END; $BODY$;Regards,Aditya.On Fri, Apr 30, 2021 at 6:32 PM aditya desai <[email protected]> wrote:Hi Justin,Thanks for your response. We have a user defined type created as below and we need to pass this user defined parameter to a procedure from .net code. Basically the procedure needs to accept multiple rows as parameters(user defined table type). This happened seamlessly in SQL Server but while doing it in Postgres after migration we get the error mentioned in the above chain. Is theere any way we can achieve this?CREATE TYPE public.optiontype AS ( projectid integer, optionid integer, phaseid integer, remarks text);Regards,Aditya.On Thu, Apr 29, 2021 at 6:32 PM Justin Pryzby <[email protected]> wrote:On Thu, Apr 29, 2021 at 02:52:23PM +0530, aditya desai wrote:\n> Hi,\n> One of the procs which accept tabletype as parameter gives below error\n> while being called from Application. Could not find a concrete solution for\n> this. Can someone help?\n> \n> call PROCEDURE ABC (p_optiontable optiontype)\n\nWhat is PROCEDURE ABC ? If you created it, send its definition with your problem report.\n\n> Below is the error while executing proc -\n\nHow are you executing it? 
This seems like an error from npgsl, not postgres.\nIt may be a client-side error, and it may be that the query isn't even being\nsent to the server at that point.\n\n> “the clr type system.data.datatable isn't natively supported by npgsql or\n> your postgresql. to use it with a postgresql composite you need to specify\n> datatypename or to map it, please refer to the documentation.”\n\nDid you do this ?\nhttps://www.npgsql.org/doc/types/enums_and_composites.html\n\n-- \nJustin",
"msg_date": "Fri, 30 Apr 2021 23:13:18 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error while calling proc with table type from Application\n (npgsql)"
}
] |
[
{
"msg_contents": "Hi!\n\nWhich postgresql logging parameters should I activate to log the “number of tuples returned” for a query?\nI would like to debug some dynamicly generated queries in the system that are returning a absurd number of tuples (> 2,6mi of records).\n\nThanks,\n\nER\n\nEnviado do Email<https://go.microsoft.com/fwlink/?LinkId=550986> para Windows 10\n\n\n\n\n\n\n\n\n\n\nHi!\n \nWhich postgresql logging parameters should I activate to log the “number of tuples returned” for a query?\nI would like to debug some dynamicly generated queries in the system that are returning a absurd number of tuples (> 2,6mi of records).\n \nThanks,\n \nER\n \nEnviado do \nEmail para Windows 10",
"msg_date": "Thu, 29 Apr 2021 15:01:29 +0000",
"msg_from": "Edson Richter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Log number of tuples returned"
},
{
"msg_contents": "On Thu, Apr 29, 2021 at 03:01:29PM +0000, Edson Richter wrote:\n> Which postgresql logging parameters should I activate to log the “number of tuples returned” for a query?\n> I would like to debug some dynamicly generated queries in the system that are returning a absurd number of tuples (> 2,6mi of records).\n\nYou can load the auto_explain extension and set:\nauto_explain.log_analyze\n\nAnd then the \"rows\" are logged: \n Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.004 rows=1 loops=1) \n\nhttps://www.postgresql.org/docs/current/auto-explain.html\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 29 Apr 2021 10:08:26 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Log number of tuples returned"
},
{
"msg_contents": "From: Justin Pryzby<mailto:[email protected]>\nSent: Thursday, April 29, 2021 12:08\nTo: Edson Richter<mailto:[email protected]>\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: Log number of tuples returned\n\nOn Thu, Apr 29, 2021 at 03:01:29PM +0000, Edson Richter wrote:\n> Which postgresql logging parameters should I activate to log the “number of tuples returned” for a query?\n> I would like to debug some dynamically generated queries in the system that are returning an absurd number of tuples (> 2.6M records).\n\nYou can load the auto_explain extension and set:\nauto_explain.log_analyze\n\nAnd then the \"rows\" are logged:\n Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.004 rows=1 loops=1)\n\nhttps://www.postgresql.org/docs/current/auto-explain.html\n\n--\nJustin\n\n\nPerfect, Thanks!\n\nER.",
"msg_date": "Thu, 29 Apr 2021 15:31:04 +0000",
"msg_from": "Edson Richter <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Log number of tuples returned"
}
] |
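The auto_explain setup Justin recommends in the thread above amounts to a couple of postgresql.conf lines; a minimal sketch (the duration threshold is illustrative, not from the thread):

```
# postgresql.conf — load auto_explain and log plans with actual row counts
shared_preload_libraries = 'auto_explain'   # requires a server restart
auto_explain.log_min_duration = '250ms'     # illustrative threshold; 0 logs every statement
auto_explain.log_analyze = on               # adds actual time=... rows=... to each plan node
```

With `log_analyze = on`, every logged plan node carries its real row count, so the dynamically generated queries returning millions of tuples show up directly in the server log.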
[
{
"msg_contents": "PreparedStatement: 15s\nRaw query with embedded params: 1s\nSee issue on github with query and explain analyze:\nhttps://github.com/pgjdbc/pgjdbc/issues/2145",
"msg_date": "Sun, 02 May 2021 19:45:26 +0000",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "On Sun, May 02, 2021 at 07:45:26PM +0000, Alex wrote:\n> PreparedStatement: 15s\n> Raw query with embedded params: 1s\n> See issue on github with query and explain analyze:\n> https://github.com/pgjdbc/pgjdbc/issues/2145\n\n| ..PostgreSQL Version? 12\n|Prepared statement\n|...\n|Planning Time: 11.596 ms\n|Execution Time: 14799.266 ms\n|\n|Raw statement\n|Planning Time: 22.685 ms\n|Execution Time: 1012.992 ms\n\nThe prepared statement has 2x faster planning time, which is what it's meant to\nimprove.\n\nThe execution time is slower, and I think you can improve it with this.\nhttps://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE\n|plan_cache_mode (enum)\n| Prepared statements (either explicitly prepared or implicitly generated, for example by PL/pgSQL) can be executed using custom or generic plans. Custom plans are made afresh for each execution using its specific set of parameter values, while generic plans do not rely on the parameter values and can be re-used across executions. Thus, use of a generic plan saves planning time, but if the ideal plan depends strongly on the parameter values then a generic plan may be inefficient. The choice between these options is normally made automatically, but it can be overridden with plan_cache_mode. The allowed values are auto (the default), force_custom_plan and force_generic_plan. This setting is considered when a cached plan is to be executed, not when it is prepared. For more information see PREPARE.\n\n-- \nJustin",
"msg_date": "Mon, 3 May 2021 15:18:12 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "Shouldn't this process be automatic based on some heuristics?\nSaving 10ms planning but costing 14s execution is catastrophic.\nFor example, using some statistics to limit planner time to some percent of \nof previous executions. \nThis way, if query is fast, planning is fast, but if query is slow, more \nplanning can save huge execution time.\nThis is a better general usage option and should be enabled by default, and \nusers who want fast planning should set the variable to use the generic \nplan.\nJustin Pryzby wrote:\nOn Sun, May 02, 2021 at 07:45:26PM +0000, Alex wrote:\nPreparedStatement: 15s\nRaw query with embedded params: 1s\nSee issue on github with query and explain analyze:\nhttps://github.com/pgjdbc/pgjdbc/issues/2145 \n<https://github.com/pgjdbc/pgjdbc/issues/2145> \n| ..PostgreSQL Version? 12\n|Prepared statement\n|...\n|Planning Time: 11.596 ms\n|Execution Time: 14799.266 ms\n|\n|Raw statement\n|Planning Time: 22.685 ms\n|Execution Time: 1012.992 ms\nThe prepared statemnt has 2x faster planning time, which is what it's meant \nto\nimprove.\nThe execution time is slower, and I think you can improve it with this.\nhttps://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE \n<https://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE> \n\n|plan_cache_mode (enum)\n| Prepared statements (either explicitly prepared or implicitly generated, \nfor example by PL/pgSQL) can be executed using custom or generic plans. \nCustom plans are made afresh for each execution using its specific set of \nparameter values, while generic plans do not rely on the parameter values \nand can be re-used across executions. Thus, use of a generic plan saves \nplanning time, but if the ideal plan depends strongly on the parameter \nvalues then a generic plan may be inefficient. The choice between these \noptions is normally made automatically, but it can be overridden with \nplan_cache_mode. 
The allowed values are auto (the default), \nforce_custom_plan and force_generic_plan. This setting is considered when a \ncached plan is to be executed, not when it is prepared. For more \ninformation see PREPARE.\n-- \nJustin\n\n\n\nShouldn't this process be automatic based on some \nheuristics? Saving 10ms planning but costing 14s execution is \ncatastrophic.For example, \nusing some statistics to limit planner time to some percent of of previous \nexecutions. This way, if query is fast, planning \nis fast, but if query is slow, more planning can save huge execution \ntime.This is a better general usage option and should \nbe enabled by default, and users who want fast planning should set the \nvariable to use the generic plan.Justin Pryzby wrote:On Sun, May 02, \n2021 at 07:45:26PM +0000, Alex wrote: PreparedStatement: 15s Raw \nquery with embedded params: 1s See issue on github with query \nand explain analyze: https://github.com/pgjdbc/pgjdbc/issues/2145| \n..PostgreSQL Version? 12|Prepared \nstatement|...|Planning Time: 11.596 \nms|Execution Time: 14799.266 ms||Raw \nstatement|Planning Time: 22.685 ms|Execution Time: \n1012.992 msThe prepared statemnt has 2x faster \nplanning time, which is what it's meant \ntoimprove.The execution time is \nslower, and I think you can improve it with this.https://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE|plan_cache_mode \n(enum)| Prepared statements (either explicitly \nprepared or implicitly generated, for example by PL/pgSQL) can be executed \nusing custom or generic plans. Custom plans are made afresh for each \nexecution using its specific set of parameter values, while generic plans \ndo not rely on the parameter values and can be re-used across executions. \nThus, use of a generic plan saves planning time, but if the ideal plan \ndepends strongly on the parameter values then a generic plan may be \ninefficient. 
The choice between these options is normally made \nautomatically, but it can be overridden with plan_cache_mode. The allowed \nvalues are auto (the default), force_custom_plan and force_generic_plan. \nThis setting is considered when a cached plan is to be executed, not when \nit is prepared. For more information see \nPREPARE.-- \nJustin",
"msg_date": "Tue, 04 May 2021 09:21:32 +0000",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "On Tue, May 4, 2021 at 6:05 AM Alex <[email protected]> wrote:\n\n> Shouldn't this process be automatic based on some heuristics?\n>\n> Saving 10ms planning but costing 14s execution is catastrophic.\n>\n> For example, using some statistics to limit planner time to some percent\n> of previous executions.\n> This way, if query is fast, planning is fast, but if query is slow, more\n> planning can save huge execution time.\n> This is a better general usage option and should be enabled by default,\n> and users who want fast planning should set the variable to use the generic\n> plan.\n>\n\n\"fast\" and \"slow\" are relative things. There are many queries that I would\nbe overjoyed with if they completed in 5 _minutes_. And others where they\nhave to complete within 100ms or something is really wrong. We don't\nreally know what the execution time is until the query actually executes.\nPlanning is a guess for the best approach.\n\nAnother factor is whether the data is in cache or out on disk. Sometimes\nyou don't really know until you try to go get it. That can significantly\nchange query performance and plans - especially if some of the tables in a\nquery with a lot of joins are in cache and some aren't and maybe some have\nto be swapped out to pick up others.\n\nIf you are running the same dozen queries with different but similarly\nscoped parameters over and over, one would hope that the system would\nslowly tune itself to be highly optimized for those dozen queries. That is\na pretty narrow use case for a powerful general purpose relational database\nthough.",
"msg_date": "Tue, 4 May 2021 08:12:38 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "\"Powerful general purpose relational database\" but not smart...\nI propose a feature to use information from previously executed queries to adjust the query plan time accordingly.\nReusing the same generic plan may and will lead to very long execution times.\nRick Otten wrote:\nOn Tue, May 4, 2021 at 6:05 AM Alex <[email protected]> wrote:\nShouldn't this process be automatic based on some heuristics?\nSaving 10ms planning but costing 14s execution is catastrophic.\nFor example, using some statistics to limit planner time to some percent of previous executions.\nThis way, if query is fast, planning is fast, but if query is slow, more planning can save huge execution time.\nThis is a better general usage option and should be enabled by default, and users who want fast planning should set the variable to use the generic plan.\n\"fast\" and \"slow\" are relative things. There are many queries that I would be overjoyed with if they completed in 5 _minutes_. And others where they have to complete within 100ms or something is really wrong. We don't really know what the execution time is until the query actually executes. Planning is a guess for the best approach.\nAnother factor is whether the data is in cache or out on disk. Sometimes you don't really know until you try to go get it. That can significantly change query performance and plans - especially if some of the tables in a query with a lot of joins are in cache and some aren't and maybe some have to be swapped out to pick up others.\nIf you are running the same dozen queries with different but similarly scoped parameters over and over, one would hope that the system would slowly tune itself to be highly optimized for those dozen queries. That is a pretty narrow use case for a powerful general purpose relational database though.",
"msg_date": "Tue, 04 May 2021 13:59:16 +0000",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "On Tue, 2021-05-04 at 13:59 +0000, Alex wrote:\n> \"Powerful general purpose relational database\" but not smart... \n\nToo smart can easily become slow...\n\n> I propose a feature to use information from previously executed queries to adjust the query plan time accordingly.\n> Reusing the same generic plan may and will lead to very long execution times.\n\nAI can go wrong too, and I personally would be worried that such cases\nare very hard to debug...\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 04 May 2021 17:22:04 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "I am not an expert on this, But I would like to take a shot :)\n\nIs it possible to share your prepared statement and parameter types.\nI mean\n\nsomething like this\n\nPREPARE usrrptplan (int) AS\n SELECT * FROM users u, logs l WHERE u.usrid=$1 AND u.usrid=l.usrid\n AND l.date = $2;\nEXECUTE usrrptplan(1, current_date);\n\n\nIt's just that sometimes the datatypes of the prepared statement params are\nnot the same as the datatype of the field in the join and as a result it\nmay add some overhead.\nPostgreSQL - general - bpchar, text and indexes (postgresql-archive.org)\n<https://www.postgresql-archive.org/bpchar-text-and-indexes-td5888846.html>\nThere was one more thread where a person has similar issues, which was\nsorted by using the relevant field type in the prepared field.\n\n\nbPMA | explain.depesz.com <https://explain.depesz.com/s/bPMA> -> slow\n(prepared) Row 8\nTsNn | explain.depesz.com <https://explain.depesz.com/s/TsNn> -> fast\n(direct) Row 8\nIt seems the join filters in the prepared version are doing a lot of work\non the fields massaging the fields that may add the cost overhead,\n\n\nAlso, if the above does not work, can you try the below plan GUC to check\nif you see any improvements.\n\nTech preview: How PostgreSQL 12 handles prepared plans - CYBERTEC\n(cybertec-postgresql.com)\n<https://www.cybertec-postgresql.com/en/tech-preview-how-postgresql-12-handles-prepared-plans/>\n\nThanks,\nVijay\n\nOn Tue, 4 May 2021 at 20:52, Laurenz Albe <[email protected]> wrote:\n\n> On Tue, 2021-05-04 at 13:59 +0000, Alex wrote:\n> > \"Powerful general purpose relational database\" but not smart...\n>\n> Too smart can easily become slow...\n>\n> > I propose a feature to use information from previously executed queries\n> to adjust the query plan time accordingly.\n> > Reusing the same generic plan may and will lead to very long execution\n> times.\n>\n> AI can go wrong too, and I personally would be worried that such cases\n> are very hard to debug...\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n\n-- \nThanks,\nVijay\nMumbai, India",
"msg_date": "Tue, 4 May 2021 21:20:19 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "On Tue, 4 May 2021 at 22:05, Alex <[email protected]> wrote:\n> Shouldn't this process be automatic based on some heuristics?\n\nWhen plan_cache_mode is set to \"auto\", then the decision to use a\ngeneric or custom plan is cost-based. See [1]. There's a fairly crude\nmethod there for estimating the effort required to replan the query.\nThe remainder is based on the average cost of the previous custom\nplans + estimated planning effort vs cost of the generic plan. The\ncheaper one wins.\n\nCertainly, what's there is far from perfect. There are various\nproblems with it. The estimated planning cost is pretty crude and\ncould do with an overhaul. There are also issues with the plan costs\nnot being true to the cost of the query. One problem there is that\nrun-time partition pruning is not costed into the plan. This might\ncause choose_custom_plan() to pick a custom plan when a generic one\nwith run-time pruning might have been better.\n\nIn order to get a better idea of where things are going wrong for you,\nwe'd need to see the EXPLAIN ANALYZE output for both the custom and\nthe generic plan.\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c#L1019\n\n\n",
"msg_date": "Wed, 5 May 2021 18:57:04 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "On Mon, May 03, 2021 at 03:18:11PM -0500, Justin Pryzby wrote:\n> On Sun, May 02, 2021 at 07:45:26PM +0000, Alex wrote:\n> > PreparedStatement: 15s\n> > Raw query with embedded params: 1s\n> > See issue on github with query and explain analyze:\n> > https://github.com/pgjdbc/pgjdbc/issues/2145\n> \n> | ..PostgreSQL Version? 12\n> |Prepared statement\n> |...\n> |Planning Time: 11.596 ms\n> |Execution Time: 14799.266 ms\n> |\n> |Raw statement\n> |Planning Time: 22.685 ms\n> |Execution Time: 1012.992 ms\n> \n> The prepared statemnt has 2x faster planning time, which is what it's meant to\n> improve.\n> \n> The execution time is slower, and I think you can improve it with this.\n> https://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE\n\nAlso, the rowcount estimates are way off starting with the scan nodes.\n\n -> Bitmap Heap Scan on category_property_name cpn_limits (cost=32.13..53.55 rows=14 width=29) (actual time=0.665..8.822 rows=2650 loops=1)\n\tRecheck Cond: ((lexeme = ANY ('{''rata'',\"\"''polling'' ''rata'' ''ratez''\"\",\"\"''polling'' ''rata''\"\",\"\"''rata'' ''ratez'' ''semnal'' ''usb-ul''\"\"}'::tsvector[])) OR (lexeme = '''frecventa'' ''frecventez'''::tsvector) OR (lexeme = '''raportare'' ''rata'' ''ratez'''::tsvector) OR (lexeme = ANY ('{''latime'',\"\"''latime'' ''placi''\"\",\"\"''compatibila'' ''latime'' ''telefon''\"\"}'::tsvector[])) OR (lexeme = '''lungime'''::tsvector) OR (lexeme = '''cablu'' ''lungime'''::tsvector) OR (lexeme = '''inaltime'''::tsvector) OR (lexeme = '''rezolutie'''::tsvector) OR (lexeme = '''greutate'''::tsvector))\n\tHeap Blocks: exact=85\n\t-> BitmapOr (cost=32.13..32.13 rows=14 width=0) (actual time=0.574..0.577 rows=0 loops=1) \n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..9.17 rows=4 width=0) (actual time=0.088..0.089 rows=10 loops=1)\n\t\t Index Cond: (lexeme = ANY ('{''rata'',\"\"''polling'' ''rata'' ''ratez''\"\",\"\"''polling'' 
''rata''\"\",\"\"''rata'' ''ratez'' ''semnal'' ''usb-ul''\"\"}'::tsvector[]))\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.047..0.047 rows=171 loops=1) \n\t\t Index Cond: (lexeme = '''frecventa'' ''frecventez'''::tsvector) -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.015..0.015 rows=1 loops=1) Index Cond: (lexeme = '''raportare'' ''rata'' ''ratez'''::tsvector)\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..6.88 rows=3 width=0) (actual time=0.097..0.097 rows=547 loops=1) Index Cond: (lexeme = ANY ('{''latime'',\"\"''latime'' ''placi''\"\",\"\"''compatibila'' ''latime'' ''telefon''\"\"}'::tsvector[]))\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.107..0.107 rows=604 loops=1) Index Cond: (lexeme = '''lungime'''::tsvector)\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.030..0.030 rows=137 loops=1)\n\t\t Index Cond: (lexeme = '''cablu'' ''lungime'''::tsvector)\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.079..0.079 rows=479 loops=1) Index Cond: (lexeme = '''inaltime'''::tsvector)\n\t -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.020..0.020 rows=40 loops=1)\n\t\t Index Cond: (lexeme = '''rezolutie'''::tsvector) -> Bitmap Index Scan on category_property_name_lexeme_idx (cost=0.00..2.29 rows=1 width=0) (actual time=0.088..0.088 rows=661 loops=1) Index Cond: (lexeme = '''greutate'''::tsvector)\n\n\n",
"msg_date": "Wed, 5 May 2021 01:59:19 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
},
{
"msg_contents": "This is exactly my issue.\nUsing the raw query, planning takes 22ms (custom plan), but using PreparedStatement planning takes 11ms (generic plan). It chose the faster generic plan, winning 11ms over the custom plan, but losing 14 seconds!!! on execution...\nThe auto choose algorithm should be changed to include execution time in the decision.\n\nOn Wednesday, May 5, 2021, 9:57:20 AM GMT+3, David Rowley <[email protected]> wrote:\n\nOn Tue, 4 May 2021 at 22:05, Alex <[email protected]> wrote:\n> Shouldn't this process be automatic based on some heuristics?\n\nWhen plan_cache_mode is set to \"auto\", then the decision to use a\ngeneric or custom plan is cost-based. See [1]. There's a fairly crude\nmethod there for estimating the effort required to replan the query.\nThe remainder is based on the average cost of the previous custom\nplans + estimated planning effort vs cost of the generic plan. The\ncheaper one wins.\n\nCertainly, what's there is far from perfect. There are various\nproblems with it. The estimated planning cost is pretty crude and\ncould do with an overhaul. There are also issues with the plan costs\nnot being true to the cost of the query. One problem there is that\nrun-time partition pruning is not costed into the plan. This might\ncause choose_custom_plan() to pick a custom plan when a generic one\nwith run-time pruning might have been better.\n\nIn order to get a better idea of where things are going wrong for you,\nwe'd need to see the EXPLAIN ANALYZE output for both the custom and\nthe generic plan.\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c#L1019",
"msg_date": "Wed, 5 May 2021 15:21:30 +0000 (UTC)",
"msg_from": "Alex <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15x slower PreparedStatement vs raw query"
}
] |
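The custom-vs-generic comparison David Rowley asks for in the thread above can be reproduced from psql by forcing each plan type in turn. This is a sketch: the SELECT is a stand-in for the real query from the github issue, and `some_table`/`some_col` are placeholder names.

```sql
-- Sketch: compare the generic and custom plan of a prepared statement.
PREPARE q(int) AS SELECT * FROM some_table WHERE some_col = $1;

SET plan_cache_mode = force_generic_plan;   -- plan keeps $1, independent of the value
EXPLAIN (ANALYZE) EXECUTE q(42);

SET plan_cache_mode = force_custom_plan;    -- replanned using the actual value 42
EXPLAIN (ANALYZE) EXECUTE q(42);
```

`plan_cache_mode` is available from PostgreSQL 12 on; setting it to `force_custom_plan` for the session is also the usual workaround when the generic plan is badly wrong for the supplied parameters.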
[
{
"msg_contents": "Hi there,\n\nI've recently been involved in migrating our old system to SQL Server and\nthen PostgreSQL. Everything has been working fine so far but now after\nexecuting our tests on Postgres, we saw a very slow running query on a\nlarge table in our database.\nI have tried asking on other platforms but no one has been able to give me\na satisfying answer.\n\n*Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build 1914,\n64-bit\nNo notable errors in the Server log and the Postgres Server itself.\n\nThe table structure :\n\nCREATE TABLE logtable\n(\n key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n id integer,\n column3 integer,\n column4 integer,\n column5 integer,\n column6 integer,\n column7 integer,\n column8 integer,\n column9 character varying(128) COLLATE pg_catalog.\"default\",\n column10 character varying(2048) COLLATE pg_catalog.\"default\",\n column11 character varying(2048) COLLATE pg_catalog.\"default\",\n column12 character varying(2048) COLLATE pg_catalog.\"default\",\n column13 character varying(2048) COLLATE pg_catalog.\"default\",\n column14 character varying(2048) COLLATE pg_catalog.\"default\",\n column15 character varying(2048) COLLATE pg_catalog.\"default\",\n column16 character varying(2048) COLLATE pg_catalog.\"default\",\n column17 character varying(2048) COLLATE pg_catalog.\"default\",\n column18 character varying(2048) COLLATE pg_catalog.\"default\",\n column19 character varying(2048) COLLATE pg_catalog.\"default\",\n column21 character varying(256) COLLATE pg_catalog.\"default\",\n column22 character varying(256) COLLATE pg_catalog.\"default\",\n column23 character varying(256) COLLATE pg_catalog.\"default\",\n column24 character varying(256) COLLATE pg_catalog.\"default\",\n column25 character varying(256) COLLATE pg_catalog.\"default\",\n column26 character varying(256) COLLATE pg_catalog.\"default\",\n column27 character varying(256) COLLATE pg_catalog.\"default\",\n column28 character 
varying(256) COLLATE pg_catalog.\"default\",\n column29 character varying(256) COLLATE pg_catalog.\"default\",\n column30 character varying(256) COLLATE pg_catalog.\"default\",\n column31 character varying(256) COLLATE pg_catalog.\"default\",\n column32 character varying(256) COLLATE pg_catalog.\"default\",\n column33 character varying(256) COLLATE pg_catalog.\"default\",\n column34 character varying(256) COLLATE pg_catalog.\"default\",\n column35 character varying(256) COLLATE pg_catalog.\"default\",\n entrytype integer,\n column37 bigint,\n column38 bigint,\n column39 bigint,\n column40 bigint,\n column41 bigint,\n column42 bigint,\n column43 bigint,\n column44 bigint,\n column45 bigint,\n column46 bigint,\n column47 character varying(128) COLLATE pg_catalog.\"default\",\n timestampcol timestamp without time zone,\n column49 timestamp without time zone,\n column50 timestamp without time zone,\n column51 timestamp without time zone,\n column52 timestamp without time zone,\n archivestatus integer,\n column54 integer,\n column55 character varying(20) COLLATE pg_catalog.\"default\",\n CONSTRAINT pkey PRIMARY KEY (key)\n USING INDEX TABLESPACE tablespace\n)\n\nTABLESPACE tablespace;\n\nALTER TABLE schema.logtable\n OWNER to user;\n\nCREATE INDEX idx_timestampcol\n ON schema.logtable USING btree\n ( timestampcol ASC NULLS LAST )\n TABLESPACE tablespace ;\n\nCREATE INDEX idx_test2\n ON schema.logtable USING btree\n ( entrytype ASC NULLS LAST)\n TABLESPACE tablespace\n WHERE archivestatus <= 1;\n\nCREATE INDEX idx_arcstatus\n ON schema.logtable USING btree\n ( archivestatus ASC NULLS LAST)\n TABLESPACE tablespace;\n\nCREATE INDEX idx_entrytype\n ON schema.logtable USING btree\n ( entrytype ASC NULLS LAST)\n TABLESPACE tablespace ;\n\n\nThe table contains 14.000.000 entries and has about 3.3 GB of data:\nNo triggers, inserts per day, probably 5-20 K per day.\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, 
pg_table_size(oid) FROM pg_class WHERE\nrelname='logtable';\n\nrelname\n|relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\nlogtable | 405988| 14091424| 405907|r | 54|false\n |NULL | 3326803968|\n\n\nThe slow running query:\n\nSELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001\nor entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n\n\nThis query runs in about 45-60 seconds.\nThe same query runs in about 289 ms Oracle and 423 ms in SQL-Server.\nNow I understand that actually loading all results would take a while.\n(about 520K or so rows)\nBut that shouldn't be exactly what happens right? There should be a\nresultset iterator which can retrieve all data but doesn't from the get go.\n\nWith the help of some people in the slack and so thread, I've found a\nconfiguration parameter which helps performance :\n\nset random_page_cost = 1;\n\nThis improved performance from 45-60 s to 15-35 s. (since we are using\nssd's)\nStill not acceptable but definitely an improvement.\nSome maybe relevant system parameters:\n\neffective_cache_size 4GB\nmaintenance_work_mem 1GB\nshared_buffers 2GB\nwork_mem 1GB\n\n\nCurrently I'm accessing the data through DbBeaver (JDBC -\npostgresql-42.2.5.jar) and our JAVA application (JDBC -\npostgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load\neverything into memory and limit the results.\nThe explain plan:\n\nEXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n(Above Query)\n\n\nGather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual\ntime=21210.019..22319.444 rows=515841 loops=1)\n Output: column1, .. 
, column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=141487 read=153489\n -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\ntime=21148.887..21297.428 rows=171947 loops=3)\n Output: column1, .. , column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 62180kB\n Worker 0: Sort Method: quicksort Memory: 56969kB\n Worker 1: Sort Method: quicksort Memory: 56837kB\n Buffers: shared hit=141487 read=153489\n Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n Buffers: shared hit=45558 read=49514\n Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n Buffers: shared hit=45104 read=49506\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5652.74..327147.77 rows=214503 width=2558) (actual\ntime=1304.813..20637.462 rows=171947 loops=3)\n Output: column1, .. , column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=103962\n Buffers: shared hit=141473 read=153489\n Worker 0: actual time=1280.472..20638.620 rows=166776 loops=1\n Buffers: shared hit=45551 read=49514\n Worker 1: actual time=1275.274..20626.219 rows=165896 loops=1\n Buffers: shared hit=45097 read=49506\n -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0)\n(actual time=1179.438..1179.438 rows=0 loops=1)\n Buffers: shared hit=9 read=1323\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=1 read=171\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\nrows=224945 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=4 read=576\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637\nrows=224926 loops=1)\n Index Cond: 
(logtable.entrytype = 4002)\n Buffers: shared hit=4 read=576\nSettings: random_page_cost = '1', search_path = '\"$user\", schema, public',\ntemp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.578 ms\nExecution Time: 22617.351 ms\n\nAs mentioned before, oracle does this much faster.\n\n-------------------------------------------------------------------------------------------------------------------------\n| Id | Operation | Name |\nRows | Bytes |TempSpc| Cost (%CPU)| Time |\n-------------------------------------------------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | |\n 6878 | 2491K| | 2143 (1)| 00:00:01 |\n| 1 | SORT ORDER BY | |\n 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n| 2 | INLIST ITERATOR | |\n | | | | |\n|* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable |\n 6878 | 2491K| | 1597 (1)| 00:00:01 |\n|* 4 | INDEX RANGE SCAN | idx_entrytype |\n 6878 | | | 23 (0)| 00:00:01 |\n-------------------------------------------------------------------------------------------------------------------------\n\nIs there much I can analyze, any information you might need to further\nanalyze this?",
"msg_date": "Thu, 6 May 2021 16:38:39 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "\n----- Original message -----\n> From: \"Semen Yefimenko\" <[email protected]>\n> To: \"pgsql-performance\" <[email protected]>\n> Sent: Thursday, May 6, 2021 11:38:39\n> Subject: Very slow Query compared to Oracle / SQL - Server\n\n\n> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n \n\nThe first thing I would try is rewriting the query to:\n\nSELECT column1,..., column54 \n FROM logtable\n WHERE (entrytype in (4000,4001,4002)) \n AND (archivestatus <= 1) \n ORDER BY timestampcol DESC;\n\nCheck if that makes a difference...\n\nLuis R. Weck \n\n\n",
"msg_date": "Thu, 6 May 2021 13:11:07 -0300 (BRT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On 05/06/21 19:11, [email protected] wrote:\n> ----- Original message -----\n>> From: \"Semen Yefimenko\" <[email protected]>\n>> To: \"pgsql-performance\" <[email protected]>\n>> Sent: Thursday, May 6, 2021 11:38:39\n>> Subject: Very slow Query compared to Oracle / SQL - Server\n>\n>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n>> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n> \n>\n> The first thing I would try is rewriting the query to:\n>\n> SELECT column1,..., column54\n> FROM logtable\n> WHERE (entrytype in (4000,4001,4002))\n> AND (archivestatus <= 1)\n> ORDER BY timestampcol DESC;\n>\n> Check if that makes a difference...\n>\n> Luis R. Weck\n>\n>\n>\nThe IN statement will probably result in just the recheck condition changing \nto entrytype = any('{a,b,c}'::int[]). Looks like the dispersion of \narchivestatus is not enough to use index idx_arcstatus.\n\nPlease try to create a partial index with a condition like (archivestatus <= \n1) and rewrite the select to use (archivestatus is not null and \narchivestatus <= 1).\n\nCREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) \nwhere (archivestatus <= 1) TABLESPACE tablespace;",
"msg_date": "Thu, 6 May 2021 21:15:36 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On 05/06/21 21:15, Alexey M Boltenkov wrote:\n> On 05/06/21 19:11, [email protected] wrote:\n>> ----- Original message -----\n>>> From: \"Semen Yefimenko\"<[email protected]>\n>>> To: \"pgsql-performance\"<[email protected]>\n>>> Sent: Thursday, May 6, 2021 11:38:39\n>>> Subject: Very slow Query compared to Oracle / SQL - Server\n>>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n>>> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>> \n>>\n>> The first thing I would try is rewriting the query to:\n>>\n>> SELECT column1,..., column54\n>> FROM logtable\n>> WHERE (entrytype in (4000,4001,4002))\n>> AND (archivestatus <= 1)\n>> ORDER BY timestampcol DESC;\n>>\n>> Check if that makes a difference...\n>>\n>> Luis R. Weck\n>>\n>>\n>>\n> The IN statement will probably result in just the recheck condition changing \n> to entrytype = any('{a,b,c}'::int[]). Looks like the dispersion of \n> archivestatus is not enough to use index idx_arcstatus.\n>\n> Please try to create a partial index with a condition like (archivestatus \n> <= 1) and rewrite the select to use (archivestatus is not null and \n> archivestatus <= 1).\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) \n> where (archivestatus <= 1) TABLESPACE tablespace;\n>\nI'm sorry, 'archivestatus is not null' is only necessary for an index \nwithout nulls.\n\n\nCREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) \nwhere (archivestatus is not null and archivestatus <= 1) TABLESPACE \ntablespace;",
"msg_date": "Thu, 6 May 2021 21:20:28 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "Yes, rewriting the query with an IN clause was also my first approach, but\nit didn't help much.\nThe query plan did change a little bit but the performance was not impacted.\n\nCREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus )\nwhere (archivestatus <= 1);\nANALYZE schema.logtable;\n\n\nThis resulted in this query plan:\n\nGather Merge (cost=344618.96..394086.05 rows=423974 width=2549) (actual\ntime=7327.777..9142.358 rows=516031 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=179817 read=115290\n -> Sort (cost=343618.94..344148.91 rows=211987 width=2549) (actual\ntime=7258.314..7476.733 rows=172010 loops=3)\n Output: column1, .. , column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 64730kB\n Worker 0: Sort Method: quicksort Memory: 55742kB\n Worker 1: Sort Method: quicksort Memory: 55565kB\n Buffers: shared hit=179817 read=115290\n Worker 0: actual time=7231.774..7458.703 rows=161723 loops=1\n Buffers: shared hit=55925 read=36265\n Worker 1: actual time=7217.856..7425.754 rows=161990 loops=1\n Buffers: shared hit=56197 read=36242\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5586.50..324864.86 rows=211987 width=2549) (actual\ntime=1073.266..6805.850 rows=172010 loops=3)\n Output: column1, .. 
, column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=109146\n Buffers: shared hit=179803 read=115290\n Worker 0: actual time=1049.875..6809.231 rows=161723 loops=1\n Buffers: shared hit=55918 read=36265\n Worker 1: actual time=1035.156..6788.037 rows=161990 loops=1\n Buffers: shared hit=56190 read=36242\n -> BitmapOr (cost=5586.50..5586.50 rows=514483 width=0)\n(actual time=945.179..945.179 rows=0 loops=1)\n Buffers: shared hit=3 read=1329\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..738.13 rows=72893 width=0) (actual time=147.915..147.916\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=1 read=171\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2326.17 rows=229965 width=0) (actual time=473.450..473.451\nrows=225040 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=1 read=579\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2140.61 rows=211624 width=0) (actual time=323.801..323.802\nrows=225021 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=1 read=579\nSettings: random_page_cost = '1', search_path = '\"$user\", schema, public',\ntemp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.810 ms\nExecution Time: 9647.406 ms\n\n\nseemingly faster.\nAfter doing a few selects, I reran ANALYZE:\nNow it's even faster, probably due to cache and other mechanisms.\n\nGather Merge (cost=342639.19..391676.44 rows=420290 width=2542) (actual\ntime=2944.803..4534.725 rows=516035 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=147334 read=147776\n -> Sort (cost=341639.16..342164.53 rows=210145 width=2542) (actual\ntime=2827.256..3013.960 rows=172012 loops=3)\n Output: column1, .. 
, column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 71565kB\n Worker 0: Sort Method: quicksort Memory: 52916kB\n Worker 1: Sort Method: quicksort Memory: 51556kB\n Buffers: shared hit=147334 read=147776\n Worker 0: actual time=2771.975..2948.928 rows=153292 loops=1\n Buffers: shared hit=43227 read=43808\n Worker 1: actual time=2767.752..2938.688 rows=148424 loops=1\n Buffers: shared hit=42246 read=42002\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5537.95..323061.27 rows=210145 width=2542) (actual\ntime=276.401..2418.925 rows=172012 loops=3)\n Output: column1, .. , column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=122495\n Buffers: shared hit=147320 read=147776\n Worker 0: actual time=227.701..2408.580 rows=153292 loops=1\n Buffers: shared hit=43220 read=43808\n Worker 1: actual time=225.996..2408.705 rows=148424 loops=1\n Buffers: shared hit=42239 read=42002\n -> BitmapOr (cost=5537.95..5537.95 rows=509918 width=0)\n(actual time=203.940..203.941 rows=0 loops=1)\n Buffers: shared hit=1332\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..680.48 rows=67206 width=0) (actual time=31.155..31.156\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=172\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2220.50 rows=219476 width=0) (actual time=112.459..112.461\nrows=225042 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=580\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2258.70 rows=223236 width=0) (actual time=60.313..60.314\nrows=225023 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=580\nSettings: random_page_cost = '1', search_path = '\"$user\", schema, public',\ntemp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.609 ms\nExecution Time: 4984.490 ms\n\nI don't see the new index used but it seems 
it's boosting the performance\nnevertheless.\nI kept the query, so I didn't rewrite the query to be WITHOUT nulls.\nThank you already for the hint. What else can I do? With the current\nparameters, the query finishes in about 3.9-5.2 seconds which is\nalready much better but still nowhere near the speeds of 280 ms in oracle.\nI would love to get it to at least 1 second.\n\n\nOn Thu, May 6, 2021 at 8:20 PM Alexey M Boltenkov <\[email protected]> wrote:\n\n> On 05/06/21 21:15, Alexey M Boltenkov wrote:\n>\n> On 05/06/21 19:11, [email protected] wrote:\n>\n> ----- Original message -----\n>\n> From: \"Semen Yefimenko\" <[email protected]> <[email protected]>\n> To: \"pgsql-performance\" <[email protected]> <[email protected]>\n> Sent: Thursday, May 6, 2021 11:38:39\n> Subject: Very slow Query compared to Oracle / SQL - Server\n>\n> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>\n>\n>\n> The first thing I would try is rewriting the query to:\n>\n> SELECT column1,..., column54\n> FROM logtable\n> WHERE (entrytype in (4000,4001,4002))\n> AND (archivestatus <= 1)\n> ORDER BY timestampcol DESC;\n>\n> Check if that makes a difference...\n>\n> Luis R. Weck\n>\n>\n>\n>\n> The IN statement will probably result in just recheck condition change to entrytype\n> = any('{a,b,c}'::int[]). 
Looks like dispersion of archivestatus is not\n> enough to use index idx_arcstatus.\n>\n> Please try to create partial index with condition like (archivestatus <=\n> 1) and rewrite select to use (archivestatus is not null and archivestatus\n> <= 1).\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where (archivestatus\n> <= 1) TABLESPACE tablespace;\n>\n> I'm sorry, 'archivestatus is not null' is only necessary for index\n> without nulls.\n>\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where\n> (archivestatus is not null and archivestatus <= 1) TABLESPACE tablespace;\n>\n\nYes, rewriting the query with an IN clause was also my first approach, but I didn't help much. The Query plan did change a little bit but the performance was not impacted.CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where (archivestatus <= 1)ANALYZE \n\nschema.logtableThis resulted in this query plan:Gather Merge (cost=344618.96..394086.05 rows=423974 width=2549) (actual time=7327.777..9142.358 rows=516031 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=179817 read=115290 -> Sort (cost=343618.94..344148.91 rows=211987 width=2549) (actual time=7258.314..7476.733 rows=172010 loops=3) Output: column1, .. , column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 64730kB Worker 0: Sort Method: quicksort Memory: 55742kB Worker 1: Sort Method: quicksort Memory: 55565kB Buffers: shared hit=179817 read=115290 Worker 0: actual time=7231.774..7458.703 rows=161723 loops=1 Buffers: shared hit=55925 read=36265 Worker 1: actual time=7217.856..7425.754 rows=161990 loops=1 Buffers: shared hit=56197 read=36242 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5586.50..324864.86 rows=211987 width=2549) (actual time=1073.266..6805.850 rows=172010 loops=3) Output: column1, .. 
, column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=109146 Buffers: shared hit=179803 read=115290 Worker 0: actual time=1049.875..6809.231 rows=161723 loops=1 Buffers: shared hit=55918 read=36265 Worker 1: actual time=1035.156..6788.037 rows=161990 loops=1 Buffers: shared hit=56190 read=36242 -> BitmapOr (cost=5586.50..5586.50 rows=514483 width=0) (actual time=945.179..945.179 rows=0 loops=1) Buffers: shared hit=3 read=1329 -> Bitmap Index Scan on idx_entrytype (cost=0.00..738.13 rows=72893 width=0) (actual time=147.915..147.916 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=1 read=171 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2326.17 rows=229965 width=0) (actual time=473.450..473.451 rows=225040 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=1 read=579 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2140.61 rows=211624 width=0) (actual time=323.801..323.802 rows=225021 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=1 read=579Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.810 msExecution Time: 9647.406 msseemingly faster.After doing a few selects, I reran ANALYZE:Now it's even faster, probably due to cache and other mechanisms.Gather Merge (cost=342639.19..391676.44 rows=420290 width=2542) (actual time=2944.803..4534.725 rows=516035 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=147334 read=147776 -> Sort (cost=341639.16..342164.53 rows=210145 width=2542) (actual time=2827.256..3013.960 rows=172012 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 71565kB Worker 0: Sort Method: quicksort Memory: 52916kB Worker 1: Sort Method: quicksort Memory: 51556kB Buffers: shared hit=147334 read=147776 Worker 0: actual time=2771.975..2948.928 rows=153292 loops=1 Buffers: shared hit=43227 read=43808 Worker 1: actual time=2767.752..2938.688 rows=148424 loops=1 Buffers: shared hit=42246 read=42002 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5537.95..323061.27 rows=210145 width=2542) (actual time=276.401..2418.925 rows=172012 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=122495 Buffers: shared hit=147320 read=147776 Worker 0: actual time=227.701..2408.580 rows=153292 loops=1 Buffers: shared hit=43220 read=43808 Worker 1: actual time=225.996..2408.705 rows=148424 loops=1 Buffers: shared hit=42239 read=42002 -> BitmapOr (cost=5537.95..5537.95 rows=509918 width=0) (actual time=203.940..203.941 rows=0 loops=1) Buffers: shared hit=1332 -> Bitmap Index Scan on idx_entrytype (cost=0.00..680.48 rows=67206 width=0) (actual time=31.155..31.156 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=172 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2220.50 rows=219476 width=0) (actual time=112.459..112.461 rows=225042 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=580 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2258.70 rows=223236 width=0) (actual time=60.313..60.314 rows=225023 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=580Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.609 msExecution Time: 4984.490 msI don't see the new index used but it seems it's boosting the performance nevertheless.I kept the query, so I didn't rewrite the 
query to be WITHOUT nulls. Thank you already for the hint. What else can I do? With the current parameters, the query finishes in about 3.9-5.2 seconds which is already much better but still nowhere near the speeds of 280 ms in oracle.I would love to get it to at least 1 second. Am Do., 6. Mai 2021 um 20:20 Uhr schrieb Alexey M Boltenkov <[email protected]>:\n\nOn 05/06/21 21:15, Alexey M Boltenkov\n wrote:\n\n\nOn 05/06/21 19:11, [email protected] wrote:\n\n\n----- Mensagem original -----\n\n\nDe: \"Semen Yefimenko\" <[email protected]>\nPara: \"pgsql-performance\" <[email protected]>\nEnviadas: Quinta-feira, 6 de maio de 2021 11:38:39\nAssunto: Very slow Query compared to Oracle / SQL - Server\n\n\n\nSELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\nentrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n\n\n \n\nThe first thing I would try is rewriting the query to:\n\nSELECT column1,..., column54 \n FROM logtable\n WHERE (entrytype in (4000,4001,4002)) \n AND (archivestatus <= 1)) \n ORDER BY timestampcol DESC;\n\nCheck if that makes a difference...\n\nLuis R. Weck \n\n\n\n\n\nThe IN statement will probable result in\n just recheck condition change to entrytype = any('{a,b,c}'::int[]). Looks like dispersion of\n archivestatus is not enough to use index idx_arcstatus.\nPlease try to create partial index with\n condition like (archivestatus <= 1) and rewrite\n select to use (archivestatus is not\n null and archivestatus <= 1).\nCREATE INDEX idx_arcstatus_le1 ON\n schema.logtable ( archivestatus ) where (archivestatus <= 1) TABLESPACE\n tablespace;\n\n\n\nI'm sorry, 'archivestatus is not\n null' is only necessary for index without nulls.\n\n\nCREATE INDEX idx_arcstatus_le1 ON\n schema.logtable ( archivestatus ) where (archivestatus is\n not null and archivestatus <= 1)\n TABLESPACE tablespace;",
"msg_date": "Thu, 6 May 2021 20:59:34 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "På torsdag 06. mai 2021 kl. 20:59:34, skrev Semen Yefimenko <\[email protected] <mailto:[email protected]>>: \nYes, rewriting the query with an IN clause was also my first approach, but I \ndidn't help much.\n The Query plan did change a little bit but the performance was not impacted.\nCREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where \n(archivestatus <= 1) \nANALYZE schema.logtable \n\n This resulted in this query plan:\n [...] \n\nI assume (4000,4001,4002) are just example-values and that they might be \nanything? Else you can just include them in your partial-index. \n\n\n\n--\n Andreas Joseph Krogh",
"msg_date": "Thu, 6 May 2021 21:26:07 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "I am not sure, if the goal is just for the specific set of predicates or\nperformance in general.\n\nAlso from the explain plan, it seems there is still a significant amount\nof buffers read vs hit.\nThat would constitute i/o and may add to slow result.\n\nWhat is the size of the table and the index ?\nIs it possible to increase shared buffers ?\ncoz it seems, you would end up reading a ton of rows and columns which\nwould benefit from having the pages in cache.\nalthough the cache needs to be warmed by a query or via external extension\n:)\n\nCan you try tuning by increasing the shared_buffers slowly in steps of\n500MB, and running explain analyze against the query.\n\nIf the Buffers read are reduced, i guess that would help speed up the query.\nFYI, increasing shared_buffers requires a server restart.\n\nAs Always,\nIgnore if this does not work :)\n\n\nThanks,\nVijay\n\n\n\nOn Fri, 7 May 2021 at 00:56, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> På torsdag 06. mai 2021 kl. 20:59:34, skrev Semen Yefimenko <\n> [email protected]>:\n>\n> Yes, rewriting the query with an IN clause was also my first approach, but\n> I didn't help much.\n> The Query plan did change a little bit but the performance was not\n> impacted.\n>\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus )\n> where (archivestatus <= 1)\n> ANALYZE schema.logtable\n>\n>\n> This resulted in this query plan:\n> [...]\n>\n>\n> I assume (4000,4001,4002) are just example-values and that they might be\n> anything? Else you can just include them in your partial-index.\n>\n> --\n> Andreas Joseph Krogh\n>\n\n\n-- \nThanks,\nVijay\nMumbai, India\n\nI am not sure, if the goal is just for the specific set of predicates or performance in general.Also from the explain plan, it seems there is still a significant amount of buffers read vs hit. 
That would constitute i/o and may add to slow result.What is the size of the table and the index ?Is it possible to increase shared buffers ?coz it seems, you would end up reading a ton of rows and columns which would benefit from having the pages in cache.although the cache needs to be warmed by a query or via external extension :) Can you try tuning by increasing the shared_buffers slowly in steps of 500MB, and running explain analyze against the query.If the Buffers read are reduced, i guess that would help speed up the query.FYI, increasing shared_buffers requires a server restart.As Always,Ignore if this does not work :)Thanks,VijayOn Fri, 7 May 2021 at 00:56, Andreas Joseph Krogh <[email protected]> wrote:På torsdag 06. mai 2021 kl. 20:59:34, skrev Semen Yefimenko <[email protected]>:\n\nYes, rewriting the query with an IN clause was also my first approach, but I didn't help much. \nThe Query plan did change a little bit but the performance was not impacted.\n \nCREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where (archivestatus <= 1)\nANALYZE schema.logtable\n\n\nThis resulted in this query plan:\n[...]\n\n\n \nI assume (4000,4001,4002) are just example-values and that they might be anything? Else you can just include them in your partial-index.\n\n\n \n--\nAndreas Joseph Krogh\n\n-- Thanks,VijayMumbai, India",
"msg_date": "Fri, 7 May 2021 01:18:27 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "Have you try of excluding not null from index? Can you give dispersion of archivestatus?06.05.2021, 21:59, \"Semen Yefimenko\" <[email protected]>:Yes, rewriting the query with an IN clause was also my first approach, but I didn't help much. The Query plan did change a little bit but the performance was not impacted.CREATE INDEX idx_arcstatus_le1 ON schema.logtable ( archivestatus ) where (archivestatus <= 1)ANALYZE \n\nschema.logtableThis resulted in this query plan:Gather Merge (cost=344618.96..394086.05 rows=423974 width=2549) (actual time=7327.777..9142.358 rows=516031 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=179817 read=115290 -> Sort (cost=343618.94..344148.91 rows=211987 width=2549) (actual time=7258.314..7476.733 rows=172010 loops=3) Output: column1, .. , column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 64730kB Worker 0: Sort Method: quicksort Memory: 55742kB Worker 1: Sort Method: quicksort Memory: 55565kB Buffers: shared hit=179817 read=115290 Worker 0: actual time=7231.774..7458.703 rows=161723 loops=1 Buffers: shared hit=55925 read=36265 Worker 1: actual time=7217.856..7425.754 rows=161990 loops=1 Buffers: shared hit=56197 read=36242 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5586.50..324864.86 rows=211987 width=2549) (actual time=1073.266..6805.850 rows=172010 loops=3) Output: column1, .. 
, column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=109146 Buffers: shared hit=179803 read=115290 Worker 0: actual time=1049.875..6809.231 rows=161723 loops=1 Buffers: shared hit=55918 read=36265 Worker 1: actual time=1035.156..6788.037 rows=161990 loops=1 Buffers: shared hit=56190 read=36242 -> BitmapOr (cost=5586.50..5586.50 rows=514483 width=0) (actual time=945.179..945.179 rows=0 loops=1) Buffers: shared hit=3 read=1329 -> Bitmap Index Scan on idx_entrytype (cost=0.00..738.13 rows=72893 width=0) (actual time=147.915..147.916 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=1 read=171 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2326.17 rows=229965 width=0) (actual time=473.450..473.451 rows=225040 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=1 read=579 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2140.61 rows=211624 width=0) (actual time=323.801..323.802 rows=225021 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=1 read=579Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.810 msExecution Time: 9647.406 msseemingly faster.After doing a few selects, I reran ANALYZE:Now it's even faster, probably due to cache and other mechanisms.Gather Merge (cost=342639.19..391676.44 rows=420290 width=2542) (actual time=2944.803..4534.725 rows=516035 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=147334 read=147776 -> Sort (cost=341639.16..342164.53 rows=210145 width=2542) (actual time=2827.256..3013.960 rows=172012 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 71565kB Worker 0: Sort Method: quicksort Memory: 52916kB Worker 1: Sort Method: quicksort Memory: 51556kB Buffers: shared hit=147334 read=147776 Worker 0: actual time=2771.975..2948.928 rows=153292 loops=1 Buffers: shared hit=43227 read=43808 Worker 1: actual time=2767.752..2938.688 rows=148424 loops=1 Buffers: shared hit=42246 read=42002 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5537.95..323061.27 rows=210145 width=2542) (actual time=276.401..2418.925 rows=172012 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=122495 Buffers: shared hit=147320 read=147776 Worker 0: actual time=227.701..2408.580 rows=153292 loops=1 Buffers: shared hit=43220 read=43808 Worker 1: actual time=225.996..2408.705 rows=148424 loops=1 Buffers: shared hit=42239 read=42002 -> BitmapOr (cost=5537.95..5537.95 rows=509918 width=0) (actual time=203.940..203.941 rows=0 loops=1) Buffers: shared hit=1332 -> Bitmap Index Scan on idx_entrytype (cost=0.00..680.48 rows=67206 width=0) (actual time=31.155..31.156 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=172 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2220.50 rows=219476 width=0) (actual time=112.459..112.461 rows=225042 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=580 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2258.70 rows=223236 width=0) (actual time=60.313..60.314 rows=225023 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=580Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.609 msExecution Time: 4984.490 msI don't see the new index used but it seems it's boosting the performance nevertheless.I kept the query, so I didn't rewrite the 
query to be WITHOUT nulls. Thank you already for the hint. What else can I do? With the current parameters, the query finishes in about 3.9-5.2 seconds which is already much better but still nowhere near the speeds of 280 ms in oracle.I would love to get it to at least 1 second. Am Do., 6. Mai 2021 um 20:20 Uhr schrieb Alexey M Boltenkov <[email protected]>:\n\nOn 05/06/21 21:15, Alexey M Boltenkov\n wrote:\n\n\nOn 05/06/21 19:11, [email protected] wrote:\n\n\n----- Mensagem original -----\n\n\nDe: \"Semen Yefimenko\" <[email protected]>\nPara: \"pgsql-performance\" <[email protected]>\nEnviadas: Quinta-feira, 6 de maio de 2021 11:38:39\nAssunto: Very slow Query compared to Oracle / SQL - Server\n\n\n\nSELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\nentrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n\n\n \n\nThe first thing I would try is rewriting the query to:\n\nSELECT column1,..., column54 \n FROM logtable\n WHERE (entrytype in (4000,4001,4002)) \n AND (archivestatus <= 1)) \n ORDER BY timestampcol DESC;\n\nCheck if that makes a difference...\n\nLuis R. Weck \n\n\n\n\n\nThe IN statement will probable result in\n just recheck condition change to entrytype = any('{a,b,c}'::int[]). Looks like dispersion of\n archivestatus is not enough to use index idx_arcstatus.\nPlease try to create partial index with\n condition like (archivestatus <= 1) and rewrite\n select to use (archivestatus is not\n null and archivestatus <= 1).\nCREATE INDEX idx_arcstatus_le1 ON\n schema.logtable ( archivestatus ) where (archivestatus <= 1) TABLESPACE\n tablespace;\n\n\n\nI'm sorry, 'archivestatus is not\n null' is only necessary for index without nulls.\n\n\nCREATE INDEX idx_arcstatus_le1 ON\n schema.logtable ( archivestatus ) where (archivestatus is\n not null and archivestatus <= 1)\n TABLESPACE tablespace;\n\n\n",
"msg_date": "Thu, 06 May 2021 22:58:30 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On Thu, May 06, 2021 at 04:38:39PM +0200, Semen Yefimenko wrote:\n> Hi there,\n> \n> I've recently been involved in migrating our old system to SQL Server and\n> then PostgreSQL. Everything has been working fine so far but now after\n> executing our tests on Postgres, we saw a very slow running query on a\n> large table in our database.\n> I have tried asking on other platforms but no one has been able to give me\n> a satisfying answer.\n\n> With the help of some people in the slack and so thread, I've found a\n> configuration parameter which helps performance :\n> set random_page_cost = 1;\n\nI wonder what the old query plan was...\nWould you include links to your prior correspondance ?\n\n> -> Parallel Bitmap Heap Scan on schema.logtable (cost=5652.74..327147.77 rows=214503 width=2558) (actual time=1304.813..20637.462 rows=171947 loops=3)\n> Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=103962\n> Buffers: shared hit=141473 read=153489\n> \n> -------------------------------------------------------------------------------------------------------------------------\n> | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n> -------------------------------------------------------------------------------------------------------------------------\n> | 0 | SELECT STATEMENT | | 6878 | 2491K| | 2143 (1)| 00:00:01 |\n> | 1 | SORT ORDER BY | | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n> | 2 | INLIST ITERATOR | | | | | | |\n> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable | 6878 | 2491K| | 1597 (1)| 00:00:01 |\n> |* 4 | INDEX RANGE SCAN | idx_entrytype | 6878 | | | 23 (0)| 00:00:01 |\n> -------------------------------------------------------------------------------------------------------------------------\n> \n> Is there much I can analyze, any information you might need to further\n> analyze this?\n\nOracle is 
apparently doing a single scan on \"entrytype\".\n\nAs a test, you could try forcing that, like:\nbegin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\nor\nbegin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n\nYou could try to reduce the cost of that scan, by clustering on idx_arcstatus,\nand then analyzing. That will affect all other queries, too. Also, the\n\"clustering\" won't be preserved with future inserts/updates/deletes, so you may\nhave to do that as a periodic maintenance command.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 May 2021 15:01:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On 05/06/21 22:58, Alexey M Boltenkov wrote:\n> Have you try of excluding not null from index? Can you give dispersion \n> of archivestatus?\n>\n>\n> 06.05.2021, 21:59, \"Semen Yefimenko\" <[email protected]>:\n>\n> Yes, rewriting the query with an IN clause was also my first\n> approach, but I didn't help much.\n> The Query plan did change a little bit but the performance was not\n> impacted.\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n> archivestatus ) where (archivestatus <= 1)\n> ANALYZE schema.logtable\n>\n>\n> This resulted in this query plan:\n>\n> Gather Merge (cost=344618.96..394086.05 rows=423974\n> width=2549) (actual time=7327.777..9142.358 rows=516031 loops=1)\n> Output: column1, .. , column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=179817 read=115290\n> -> Sort (cost=343618.94..344148.91 rows=211987 width=2549)\n> (actual time=7258.314..7476.733 rows=172010 loops=3)\n> Output: column1, .. , column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 64730kB\n> Worker 0: Sort Method: quicksort Memory: 55742kB\n> Worker 1: Sort Method: quicksort Memory: 55565kB\n> Buffers: shared hit=179817 read=115290\n> Worker 0: actual time=7231.774..7458.703 rows=161723\n> loops=1\n> Buffers: shared hit=55925 read=36265\n> Worker 1: actual time=7217.856..7425.754 rows=161990\n> loops=1\n> Buffers: shared hit=56197 read=36242\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5586.50..324864.86 rows=211987 width=2549) (actual\n> time=1073.266..6805.850 rows=172010 loops=3)\n> Output: column1, .. 
, column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=109146\n> Buffers: shared hit=179803 read=115290\n> Worker 0: actual time=1049.875..6809.231\n> rows=161723 loops=1\n> Buffers: shared hit=55918 read=36265\n> Worker 1: actual time=1035.156..6788.037\n> rows=161990 loops=1\n> Buffers: shared hit=56190 read=36242\n> -> BitmapOr (cost=5586.50..5586.50 rows=514483\n> width=0) (actual time=945.179..945.179 rows=0 loops=1)\n> Buffers: shared hit=3 read=1329\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..738.13 rows=72893 width=0) (actual\n> time=147.915..147.916 rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared hit=1 read=171\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2326.17 rows=229965 width=0) (actual\n> time=473.450..473.451 rows=225040 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=1 read=579\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2140.61 rows=211624 width=0) (actual\n> time=323.801..323.802 rows=225021 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=1 read=579\n> Settings: random_page_cost = '1', search_path = '\"$user\",\n> schema, public', temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 0.810 ms\n> Execution Time: 9647.406 ms\n>\n>\n> seemingly faster.\n> After doing a few selects, I reran ANALYZE:\n> Now it's even faster, probably due to cache and other mechanisms.\n>\n> Gather Merge (cost=342639.19..391676.44 rows=420290\n> width=2542) (actual time=2944.803..4534.725 rows=516035 loops=1)\n> Output: column1, .. , column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=147334 read=147776\n> -> Sort (cost=341639.16..342164.53 rows=210145 width=2542)\n> (actual time=2827.256..3013.960 rows=172012 loops=3)\n> Output: column1, .. 
, column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 71565kB\n> Worker 0: Sort Method: quicksort Memory: 52916kB\n> Worker 1: Sort Method: quicksort Memory: 51556kB\n> Buffers: shared hit=147334 read=147776\n> Worker 0: actual time=2771.975..2948.928 rows=153292\n> loops=1\n> Buffers: shared hit=43227 read=43808\n> Worker 1: actual time=2767.752..2938.688 rows=148424\n> loops=1\n> Buffers: shared hit=42246 read=42002\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5537.95..323061.27 rows=210145 width=2542) (actual\n> time=276.401..2418.925 rows=172012 loops=3)\n> Output: column1, .. , column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=122495\n> Buffers: shared hit=147320 read=147776\n> Worker 0: actual time=227.701..2408.580\n> rows=153292 loops=1\n> Buffers: shared hit=43220 read=43808\n> Worker 1: actual time=225.996..2408.705\n> rows=148424 loops=1\n> Buffers: shared hit=42239 read=42002\n> -> BitmapOr (cost=5537.95..5537.95 rows=509918\n> width=0) (actual time=203.940..203.941 rows=0 loops=1)\n> Buffers: shared hit=1332\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..680.48 rows=67206 width=0) (actual\n> time=31.155..31.156 rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared hit=172\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2220.50 rows=219476 width=0) (actual\n> time=112.459..112.461 rows=225042 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=580\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2258.70 rows=223236 width=0) (actual\n> time=60.313..60.314 rows=225023 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=580\n> Settings: random_page_cost = '1', search_path = '\"$user\",\n> schema, public', temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 0.609 ms\n> Execution 
Time: 4984.490 ms\n>\n> I don't see the new index used but it seems it's boosting the\n> performance nevertheless.\n> I kept the query, so I didn't rewrite the query to be WITHOUT nulls.\n> Thank you already for the hint. What else can I do? With the\n> current parameters, the query finishes in about 3.9-5.2 seconds\n> which is already much better but still nowhere near the speeds of\n> 280 ms in oracle.\n> I would love to get it to at least 1 second.\n>\n>\n> Am Do., 6. Mai 2021 um 20:20 Uhr schrieb Alexey M Boltenkov\n> <[email protected] <mailto:[email protected]>>:\n>\n> On 05/06/21 21:15, Alexey M Boltenkov wrote:\n>\n> On 05/06/21 19:11, [email protected]\n> <mailto:[email protected]> wrote:\n>\n> ----- Mensagem original -----\n>\n> De: \"Semen Yefimenko\"<[email protected]>\n> <mailto:[email protected]>\n> Para: \"pgsql-performance\"<[email protected]>\n> <mailto:[email protected]>\n> Enviadas: Quinta-feira, 6 de maio de 2021 11:38:39\n> Assunto: Very slow Query compared to Oracle / SQL - Server\n>\n> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>\n> \n>\n> The first thing I would try is rewriting the query to:\n>\n> SELECT column1,..., column54\n> FROM logtable\n> WHERE (entrytype in (4000,4001,4002))\n> AND (archivestatus <= 1))\n> ORDER BY timestampcol DESC;\n>\n> Check if that makes a difference...\n>\n> Luis R. 
Weck\n>\n>\n>\n> The IN statement will probable result in just recheck\n> condition change to entrytype = any('{a,b,c}'::int[]).\n> Looks like dispersion of archivestatus is not enough to\n> use index idx_arcstatus.\n>\n> Please try to create partial index with condition like\n> (archivestatus <= 1) and rewrite select to use\n> (archivestatus is not null and archivestatus <= 1).\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n> archivestatus ) where (archivestatus <= 1) TABLESPACE\n> tablespace;\n>\n> I'm sorry, 'archivestatus is not null' is only necessary for\n> index without nulls.\n>\n>\n> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n> archivestatus ) where (archivestatus is not null and\n> archivestatus <= 1) TABLESPACE tablespace;\n>\nBTW, please try to reset random_page_cost.",
"msg_date": "Thu, 6 May 2021 23:02:07 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On 05/06/21 23:02, Alexey M Boltenkov wrote:\n> On 05/06/21 22:58, Alexey M Boltenkov wrote:\n>> Have you try of excluding not null from index? Can you give \n>> dispersion of archivestatus?\n>>\n>>\n>> 06.05.2021, 21:59, \"Semen Yefimenko\" <[email protected]>:\n>>\n>> Yes, rewriting the query with an IN clause was also my first\n>> approach, but I didn't help much.\n>> The Query plan did change a little bit but the performance was\n>> not impacted.\n>>\n>> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n>> archivestatus ) where (archivestatus <= 1)\n>> ANALYZE schema.logtable\n>>\n>>\n>> This resulted in this query plan:\n>>\n>> Gather Merge (cost=344618.96..394086.05 rows=423974\n>> width=2549) (actual time=7327.777..9142.358 rows=516031 loops=1)\n>> Output: column1, .. , column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=179817 read=115290\n>> -> Sort (cost=343618.94..344148.91 rows=211987\n>> width=2549) (actual time=7258.314..7476.733 rows=172010 loops=3)\n>> Output: column1, .. , column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 64730kB\n>> Worker 0: Sort Method: quicksort Memory: 55742kB\n>> Worker 1: Sort Method: quicksort Memory: 55565kB\n>> Buffers: shared hit=179817 read=115290\n>> Worker 0: actual time=7231.774..7458.703 rows=161723\n>> loops=1\n>> Buffers: shared hit=55925 read=36265\n>> Worker 1: actual time=7217.856..7425.754 rows=161990\n>> loops=1\n>> Buffers: shared hit=56197 read=36242\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5586.50..324864.86 rows=211987 width=2549) (actual\n>> time=1073.266..6805.850 rows=172010 loops=3)\n>> Output: column1, .. 
, column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=109146\n>> Buffers: shared hit=179803 read=115290\n>> Worker 0: actual time=1049.875..6809.231\n>> rows=161723 loops=1\n>> Buffers: shared hit=55918 read=36265\n>> Worker 1: actual time=1035.156..6788.037\n>> rows=161990 loops=1\n>> Buffers: shared hit=56190 read=36242\n>> -> BitmapOr (cost=5586.50..5586.50\n>> rows=514483 width=0) (actual time=945.179..945.179 rows=0\n>> loops=1)\n>> Buffers: shared hit=3 read=1329\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..738.13 rows=72893 width=0) (actual\n>> time=147.915..147.916 rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared hit=1 read=171\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2326.17 rows=229965 width=0) (actual\n>> time=473.450..473.451 rows=225040 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=1 read=579\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2140.61 rows=211624 width=0) (actual\n>> time=323.801..323.802 rows=225021 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=1 read=579\n>> Settings: random_page_cost = '1', search_path = '\"$user\",\n>> schema, public', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 0.810 ms\n>> Execution Time: 9647.406 ms\n>>\n>>\n>> seemingly faster.\n>> After doing a few selects, I reran ANALYZE:\n>> Now it's even faster, probably due to cache and other mechanisms.\n>>\n>> Gather Merge (cost=342639.19..391676.44 rows=420290\n>> width=2542) (actual time=2944.803..4534.725 rows=516035 loops=1)\n>> Output: column1, .. 
, column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=147334 read=147776\n>> -> Sort (cost=341639.16..342164.53 rows=210145\n>> width=2542) (actual time=2827.256..3013.960 rows=172012 loops=3)\n>> Output: column1, .. , column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 71565kB\n>> Worker 0: Sort Method: quicksort Memory: 52916kB\n>> Worker 1: Sort Method: quicksort Memory: 51556kB\n>> Buffers: shared hit=147334 read=147776\n>> Worker 0: actual time=2771.975..2948.928 rows=153292\n>> loops=1\n>> Buffers: shared hit=43227 read=43808\n>> Worker 1: actual time=2767.752..2938.688 rows=148424\n>> loops=1\n>> Buffers: shared hit=42246 read=42002\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5537.95..323061.27 rows=210145 width=2542) (actual\n>> time=276.401..2418.925 rows=172012 loops=3)\n>> Output: column1, .. , column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=122495\n>> Buffers: shared hit=147320 read=147776\n>> Worker 0: actual time=227.701..2408.580\n>> rows=153292 loops=1\n>> Buffers: shared hit=43220 read=43808\n>> Worker 1: actual time=225.996..2408.705\n>> rows=148424 loops=1\n>> Buffers: shared hit=42239 read=42002\n>> -> BitmapOr (cost=5537.95..5537.95\n>> rows=509918 width=0) (actual time=203.940..203.941 rows=0\n>> loops=1)\n>> Buffers: shared hit=1332\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..680.48 rows=67206 width=0) (actual\n>> time=31.155..31.156 rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared hit=172\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2220.50 rows=219476 width=0) (actual\n>> time=112.459..112.461 rows=225042 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=580\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2258.70 rows=223236 
width=0) (actual\n>> time=60.313..60.314 rows=225023 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=580\n>> Settings: random_page_cost = '1', search_path = '\"$user\",\n>> schema, public', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 0.609 ms\n>> Execution Time: 4984.490 ms\n>>\n>> I don't see the new index used but it seems it's boosting the\n>> performance nevertheless.\n>> I kept the query, so I didn't rewrite the query to be WITHOUT nulls.\n>> Thank you already for the hint. What else can I do? With the\n>> current parameters, the query finishes in about 3.9-5.2 seconds\n>> which is already much better but still nowhere near the speeds of\n>> 280 ms in oracle.\n>> I would love to get it to at least 1 second.\n>>\n>>\n>> Am Do., 6. Mai 2021 um 20:20 Uhr schrieb Alexey M Boltenkov\n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> On 05/06/21 21:15, Alexey M Boltenkov wrote:\n>>\n>> On 05/06/21 19:11, [email protected]\n>> <mailto:[email protected]> wrote:\n>>\n>> ----- Mensagem original -----\n>>\n>> De: \"Semen Yefimenko\"<[email protected]>\n>> <mailto:[email protected]>\n>> Para: \"pgsql-performance\"<[email protected]>\n>> <mailto:[email protected]>\n>> Enviadas: Quinta-feira, 6 de maio de 2021 11:38:39\n>> Assunto: Very slow Query compared to Oracle / SQL - Server\n>>\n>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or\n>> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>>\n>> \n>>\n>> The first thing I would try is rewriting the query to:\n>>\n>> SELECT column1,..., column54\n>> FROM logtable\n>> WHERE (entrytype in (4000,4001,4002))\n>> AND (archivestatus <= 1))\n>> ORDER BY timestampcol DESC;\n>>\n>> Check if that makes a difference...\n>>\n>> Luis R. 
Weck\n>>\n>>\n>>\n>> The IN statement will probable result in just recheck\n>> condition change to entrytype = any('{a,b,c}'::int[]).\n>> Looks like dispersion of archivestatus is not enough to\n>> use index idx_arcstatus.\n>>\n>> Please try to create partial index with condition like\n>> (archivestatus <= 1) and rewrite select to use\n>> (archivestatus is not null and archivestatus <= 1).\n>>\n>> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n>> archivestatus ) where (archivestatus <= 1) TABLESPACE\n>> tablespace;\n>>\n>> I'm sorry, 'archivestatus is not null' is only necessary for\n>> index without nulls.\n>>\n>>\n>> CREATE INDEX idx_arcstatus_le1 ON schema.logtable (\n>> archivestatus ) where (archivestatus is not null and\n>> archivestatus <= 1) TABLESPACE tablespace;\n>>\n> BTW, please try to reset random_page_cost.\n>\n>\nThe root of the problem is in:\n\n Heap Blocks: exact=122495\n\n Buffers: shared hit=147334 read=147776\n\nHave you tuned shared buffers enough? Each block is 8 kB by default.\nWrite to me directly off-list, if anything :)",
"msg_date": "Thu, 6 May 2021 23:17:24 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "*> Postgres Version : *PostgreSQL 12.2,\n> ... ON ... USING btree\n\nIMHO:\nThe next minor (bugfix & security) release is near ( expected ~ May 13th, 2021\n) https://www.postgresql.org/developer/roadmap/\nso you can update your PostgreSQL to 12.7 ( + full Reindexing\nrecommended ! )\n\nYou can find a lot of B-tree index-related fixes.\nhttps://www.postgresql.org/docs/12/release-12-3.html Release date:\n2020-05-14\n - Fix possible undercounting of deleted B-tree index pages in VACUUM\nVERBOSE output\n- Fix wrong bookkeeping for oldest deleted page in a B-tree index\n- Ensure INCLUDE'd columns are always removed from B-tree pivot tuples\nhttps://www.postgresql.org/docs/12/release-12-4.html\n - Avoid repeated marking of dead btree index entries as dead\nhttps://www.postgresql.org/docs/12/release-12-5.html\n - Fix failure of parallel B-tree index scans when the index condition is\nunsatisfiable\nhttps://www.postgresql.org/docs/12/release-12-6.html Release date:\n2021-02-11\n\n\n> COLLATE pg_catalog.\"default\"\n\nYou can test the \"C\" Collation in some columns (keys ? ) ; in theory, it\nshould be faster :\n\"The drawback of using locales other than C or POSIX in PostgreSQL is its\nperformance impact. It slows character handling and prevents ordinary\nindexes from being used by LIKE. For this reason use locales only if you\nactually need them.\"\nhttps://www.postgresql.org/docs/12/locale.html\nhttps://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com\n\nBest,\n Imre\n\n\nSemen Yefimenko <[email protected]> wrote (on Thu, 6 May\n2021 at 16:38):\n\n> Hi there,\n>\n> I've recently been involved in migrating our old system to SQL Server and\n> then PostgreSQL. 
Everything has been working fine so far but now after\n> executing our tests on Postgres, we saw a very slow running query on a\n> large table in our database.\n> I have tried asking on other platforms but no one has been able to give me\n> a satisfying answer.\n>\n> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build 1914,\n> 64-bit\n> No notable errors in the Server log and the Postgres Server itself.\n>\n> The table structure :\n>\n> CREATE TABLE logtable\n> (\n> key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n> id integer,\n> column3 integer,\n> column4 integer,\n> column5 integer,\n> column6 integer,\n> column7 integer,\n> column8 integer,\n> column9 character varying(128) COLLATE pg_catalog.\"default\",\n> column10 character varying(2048) COLLATE pg_catalog.\"default\",\n> column11 character varying(2048) COLLATE pg_catalog.\"default\",\n> column12 character varying(2048) COLLATE pg_catalog.\"default\",\n> column13 character varying(2048) COLLATE pg_catalog.\"default\",\n> column14 character varying(2048) COLLATE pg_catalog.\"default\",\n> column15 character varying(2048) COLLATE pg_catalog.\"default\",\n> column16 character varying(2048) COLLATE pg_catalog.\"default\",\n> column17 character varying(2048) COLLATE pg_catalog.\"default\",\n> column18 character varying(2048) COLLATE pg_catalog.\"default\",\n> column19 character varying(2048) COLLATE pg_catalog.\"default\",\n> column21 character varying(256) COLLATE pg_catalog.\"default\",\n> column22 character varying(256) COLLATE pg_catalog.\"default\",\n> column23 character varying(256) COLLATE pg_catalog.\"default\",\n> column24 character varying(256) COLLATE pg_catalog.\"default\",\n> column25 character varying(256) COLLATE pg_catalog.\"default\",\n> column26 character varying(256) COLLATE pg_catalog.\"default\",\n> column27 character varying(256) COLLATE pg_catalog.\"default\",\n> column28 character varying(256) COLLATE pg_catalog.\"default\",\n> column29 character 
varying(256) COLLATE pg_catalog.\"default\",\n> column30 character varying(256) COLLATE pg_catalog.\"default\",\n> column31 character varying(256) COLLATE pg_catalog.\"default\",\n> column32 character varying(256) COLLATE pg_catalog.\"default\",\n> column33 character varying(256) COLLATE pg_catalog.\"default\",\n> column34 character varying(256) COLLATE pg_catalog.\"default\",\n> column35 character varying(256) COLLATE pg_catalog.\"default\",\n> entrytype integer,\n> column37 bigint,\n> column38 bigint,\n> column39 bigint,\n> column40 bigint,\n> column41 bigint,\n> column42 bigint,\n> column43 bigint,\n> column44 bigint,\n> column45 bigint,\n> column46 bigint,\n> column47 character varying(128) COLLATE pg_catalog.\"default\",\n> timestampcol timestamp without time zone,\n> column49 timestamp without time zone,\n> column50 timestamp without time zone,\n> column51 timestamp without time zone,\n> column52 timestamp without time zone,\n> archivestatus integer,\n> column54 integer,\n> column55 character varying(20) COLLATE pg_catalog.\"default\",\n> CONSTRAINT pkey PRIMARY KEY (key)\n> USING INDEX TABLESPACE tablespace\n> )\n>\n> TABLESPACE tablespace;\n>\n> ALTER TABLE schema.logtable\n> OWNER to user;\n>\n> CREATE INDEX idx_timestampcol\n> ON schema.logtable USING btree\n> ( timestampcol ASC NULLS LAST )\n> TABLESPACE tablespace ;\n>\n> CREATE INDEX idx_test2\n> ON schema.logtable USING btree\n> ( entrytype ASC NULLS LAST)\n> TABLESPACE tablespace\n> WHERE archivestatus <= 1;\n>\n> CREATE INDEX idx_arcstatus\n> ON schema.logtable USING btree\n> ( archivestatus ASC NULLS LAST)\n> TABLESPACE tablespace;\n>\n> CREATE INDEX idx_entrytype\n> ON schema.logtable USING btree\n> ( entrytype ASC NULLS LAST)\n> TABLESPACE tablespace ;\n>\n>\n> The table contains 14.000.000 entries and has about 3.3 GB of data:\n> No triggers, inserts per day, probably 5-20 K per day.\n>\n> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n> relhassubclass, reloptions, 
pg_table_size(oid) FROM pg_class WHERE\n> relname='logtable';\n>\n> relname\n> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n>\n> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\n> logtable | 405988| 14091424| 405907|r | 54|false\n> |NULL | 3326803968|\n>\n>\n> The slow running query:\n>\n> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001\n> or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>\n>\n> This query runs in about 45-60 seconds.\n> The same query runs in about 289 ms Oracle and 423 ms in SQL-Server.\n> Now I understand that actually loading all results would take a while.\n> (about 520K or so rows)\n> But that shouldn't be exactly what happens right? There should be a\n> resultset iterator which can retrieve all data but doesn't from the get go.\n>\n> With the help of some people in the slack and so thread, I've found a\n> configuration parameter which helps performance :\n>\n> set random_page_cost = 1;\n>\n> This improved performance from 45-60 s to 15-35 s. (since we are using\n> ssd's)\n> Still not acceptable but definitely an improvement.\n> Some maybe relevant system parameters:\n>\n> effective_cache_size 4GB\n> maintenance_work_mem 1GB\n> shared_buffers 2GB\n> work_mem 1GB\n>\n>\n> Currently I'm accessing the data through DbBeaver (JDBC -\n> postgresql-42.2.5.jar) and our JAVA application (JDBC -\n> postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load\n> everything into memory and limit the results.\n> The explain plan:\n>\n> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n> (Above Query)\n>\n>\n> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual\n> time=21210.019..22319.444 rows=515841 loops=1)\n> Output: column1, .. 
, column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=141487 read=153489\n> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\n> time=21148.887..21297.428 rows=171947 loops=3)\n> Output: column1, .. , column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 62180kB\n> Worker 0: Sort Method: quicksort Memory: 56969kB\n> Worker 1: Sort Method: quicksort Memory: 56837kB\n> Buffers: shared hit=141487 read=153489\n> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n> Buffers: shared hit=45558 read=49514\n> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n> Buffers: shared hit=45104 read=49506\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5652.74..327147.77 rows=214503 width=2558) (actual\n> time=1304.813..20637.462 rows=171947 loops=3)\n> Output: column1, .. , column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=103962\n> Buffers: shared hit=141473 read=153489\n> Worker 0: actual time=1280.472..20638.620 rows=166776 loops=1\n> Buffers: shared hit=45551 read=49514\n> Worker 1: actual time=1275.274..20626.219 rows=165896 loops=1\n> Buffers: shared hit=45097 read=49506\n> -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0)\n> (actual time=1179.438..1179.438 rows=0 loops=1)\n> Buffers: shared hit=9 read=1323\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\n> rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared hit=1 read=171\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\n> rows=224945 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=4 read=576\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2466.80 rows=243782 width=0) (actual 
time=468.637..468.637\n> rows=224926 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=4 read=576\n> Settings: random_page_cost = '1', search_path = '\"$user\", schema, public',\n> temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 0.578 ms\n> Execution Time: 22617.351 ms\n>\n> As mentioned before, oracle does this much faster.\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------\n> | Id | Operation | Name |\n> Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n>\n> -------------------------------------------------------------------------------------------------------------------------\n> | 0 | SELECT STATEMENT | |\n> 6878 | 2491K| | 2143 (1)| 00:00:01 |\n> | 1 | SORT ORDER BY | |\n> 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n> | 2 | INLIST ITERATOR | |\n> | | | | |\n> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable |\n> 6878 | 2491K| | 1597 (1)| 00:00:01 |\n> |* 4 | INDEX RANGE SCAN | idx_entrytype |\n> 6878 | | | 23 (0)| 00:00:01 |\n>\n> -------------------------------------------------------------------------------------------------------------------------\n>\n> Is there much I can analyze, any information you might need to further\n> analyze this?\n>\n\n> Postgres Version : PostgreSQL 12.2,> ... ON ... USING btreeIMHO:The next minor (bugix&security) release is near ( expected ~ May 13th, 2021 ) https://www.postgresql.org/developer/roadmap/so you can update your PostgreSQL to 12.7 ( + full Reindexing recommended ! 
) You can find a lot of B-tree index-related fixes.https://www.postgresql.org/docs/12/release-12-3.html Release date: 2020-05-14 - Fix possible undercounting of deleted B-tree index pages in VACUUM VERBOSE output - Fix wrong bookkeeping for oldest deleted page in a B-tree index- Ensure INCLUDE'd columns are always removed from B-tree pivot tupleshttps://www.postgresql.org/docs/12/release-12-4.html - Avoid repeated marking of dead btree index entries as dead https://www.postgresql.org/docs/12/release-12-5.html - Fix failure of parallel B-tree index scans when the index condition is unsatisfiablehttps://www.postgresql.org/docs/12/release-12-6.html Release date: 2021-02-11> COLLATE pg_catalog.\"default\"You can test the \"C\" Collation in some columns (keys ? ) ; in theory, it should be faster :\"The drawback of using locales other than C or POSIX in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes from being used by LIKE. For this reason use locales only if you actually need them.\"https://www.postgresql.org/docs/12/locale.htmlhttps://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.comBest, ImreSemen Yefimenko <[email protected]> ezt írta (időpont: 2021. máj. 6., Cs, 16:38):Hi there,I've recently been involved in migrating our old system to SQL Server and then PostgreSQL. Everything has been working fine so far but now after executing our tests on Postgres, we saw a very slow running query on a large table in our database. I have tried asking on other platforms but no one has been able to give me a satisfying answer. 
Postgres Version : PostgreSQL 12.2, compiled by Visual C++ build 1914, 64-bitNo notable errors in the Server log and the Postgres Server itself.The table structure :CREATE TABLE logtable( key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL, id integer, \n\ncolumn3 integer, \n\ncolumn4 integer, \n\ncolumn5 integer, \n\ncolumn6 integer, \n\ncolumn7 integer, \n\ncolumn8 integer, \n\ncolumn9 character varying(128) COLLATE pg_catalog.\"default\", \n\ncolumn10 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn11 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn12 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn13 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn14 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn15 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn16 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn17 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn18 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn19 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn21 \n\ncharacter varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn22 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn23 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn24 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn25 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn26 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn27 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn28 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn29 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn30 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn31 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn32 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn33 character 
varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn34 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn35 character varying(256) COLLATE pg_catalog.\"default\", \n\nentrytype integer, \n\ncolumn37 bigint, \n\ncolumn38 bigint, \n\ncolumn39 bigint, \n\ncolumn40 bigint, \n\ncolumn41 bigint, \n\ncolumn42 bigint, \n\ncolumn43 bigint, \n\ncolumn44 bigint, \n\ncolumn45 bigint, \n\ncolumn46 bigint, \n\ncolumn47 character varying(128) COLLATE pg_catalog.\"default\", \n\ntimestampcol timestamp without time zone, \n\ncolumn49 timestamp without time zone, \n\ncolumn50 timestamp without time zone, \n\ncolumn51 timestamp without time zone, \n\ncolumn52 timestamp without time zone, \n\n\n\narchivestatus \n\ninteger, \n\ncolumn54 integer, \n\ncolumn55 character varying(20) COLLATE pg_catalog.\"default\", CONSTRAINT pkey PRIMARY KEY (key) USING INDEX TABLESPACE tablespace)TABLESPACE tablespace;ALTER TABLE schema.logtable OWNER to user;CREATE INDEX idx_timestampcol ON schema.logtable USING btree (\n\n\n\ntimestampcol \n\n\n\n ASC NULLS LAST ) TABLESPACE \n\ntablespace\n\n;CREATE INDEX idx_test2 ON schema.logtable USING btree ( entrytype ASC NULLS LAST) TABLESPACE tablespace WHERE archivestatus <= 1;CREATE INDEX idx_arcstatus ON \n\nschema.logtable USING btree ( archivestatus ASC NULLS LAST) TABLESPACE tablespace;CREATE INDEX \n\nidx_entrytype ON schema.logtable USING btree ( entrytype ASC NULLS LAST) TABLESPACE \n\ntablespace\n\n;The table contains 14.000.000 entries and has about 3.3 GB of data:No triggers, inserts per day, probably 5-20 K per day.SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='logtable';relname |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|logtable | 405988| 14091424| 405907|r | 54|false |NULL | 
3326803968|The slow running query:SELECT column1,..., column54 where ((entrytype = 4000 or \n\nentrytype \n\n= 4001 or \n\nentrytype \n\n= 4002) and (archivestatus <= 1)) order by timestampcol desc;This query runs in about 45-60 seconds.The same query runs in about 289 ms Oracle and 423 ms in SQL-Server. Now I understand that actually loading all results would take a while. (about 520K or so rows) But that shouldn't be exactly what happens right? There should be a resultset iterator which can retrieve all data but doesn't from the get go. With the help of some people in the slack and so thread, I've found a configuration parameter which helps performance : set random_page_cost = 1;This improved performance from 45-60 s to 15-35 s. (since we are using ssd's) Still not acceptable but definitely an improvement. Some maybe relevant system parameters:effective_cache_size\t4GBmaintenance_work_mem\t1GBshared_buffers\t2GBwork_mem\t1GBCurrently I'm accessing the data through DbBeaver (JDBC - postgresql-42.2.5.jar) and our JAVA application (JDBC - postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load everything into memory and limit the results. The explain plan:EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...(Above Query)Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual time=21210.019..22319.444 rows=515841 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=141487 read=153489 -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual time=21148.887..21297.428 rows=171947 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 62180kB Worker 0: Sort Method: quicksort Memory: 56969kB Worker 1: Sort Method: quicksort Memory: 56837kB Buffers: shared hit=141487 read=153489 Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1 Buffers: shared hit=45558 read=49514 Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1 Buffers: shared hit=45104 read=49506 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5652.74..327147.77 rows=214503 width=2558) (actual time=1304.813..20637.462 rows=171947 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=103962 Buffers: shared hit=141473 read=153489 Worker 0: actual time=1280.472..20638.620 rows=166776 loops=1 Buffers: shared hit=45551 read=49514 Worker 1: actual time=1275.274..20626.219 rows=165896 loops=1 Buffers: shared hit=45097 read=49506 -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0) (actual time=1179.438..1179.438 rows=0 loops=1) Buffers: shared hit=9 read=1323 -> Bitmap Index Scan on idx_entrytype (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=1 read=171 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849 rows=224945 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=4 read=576 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637 rows=224926 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=4 read=576Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.578 msExecution Time: 22617.351 msAs mentioned before, oracle does this much faster. 
-------------------------------------------------------------------------------------------------------------------------| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |-------------------------------------------------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 6878 | 2491K| | 2143 (1)| 00:00:01 || 1 | SORT ORDER BY | | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 || 2 | INLIST ITERATOR | | | | | | ||* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable | 6878 | 2491K| | 1597 (1)| 00:00:01 ||* 4 | INDEX RANGE SCAN | idx_entrytype | 6878 | | | 23 (0)| 00:00:01 |-------------------------------------------------------------------------------------------------------------------------Is there much I can analyze, any information you might need to further analyze this?",
"msg_date": "Thu, 6 May 2021 23:16:45 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure\nhow I'm supposed to do it. (single E-Mails vs many)\n\n\n> Can you try tuning by increasing the shared_buffers slowly in steps of\n> 500MB, and running explain analyze against the query.\n\n\n-- 2500 MB shared buffers - random_page_cost = 1;\nGather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\ntime=2076.329..3737.050 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=295446\n -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\ntime=2007.487..2202.707 rows=172172 loops=3)\n Output: column1, .. , column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 65154kB\n Worker 0: Sort Method: quicksort Memory: 55707kB\n Worker 1: Sort Method: quicksort Memory: 55304kB\n Buffers: shared hit=295446\n Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1\n Buffers: shared hit=91028\n Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1\n Buffers: shared hit=92133\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5546.39..323481.21 rows=210418 width=2542) (actual\ntime=322.125..1618.971 rows=172172 loops=3)\n Output: column1, .. 
, column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=110951\n Buffers: shared hit=295432\n Worker 0: actual time=282.201..1595.117 rows=161205 loops=1\n Buffers: shared hit=91021\n Worker 1: actual time=303.671..1623.299 rows=161935 loops=1\n Buffers: shared hit=92126\n -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n(actual time=199.119..199.119 rows=0 loops=1)\n Buffers: shared hit=1334\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=172\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872\nrows=225283 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=581\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377\nrows=225264 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=581\nSettings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.940 ms\nExecution Time: 4188.083 ms\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-- 3000 MB shared buffers - random_page_cost = 1;\nGather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\ntime=2062.280..3763.408 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=295446\n -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\ntime=1987.933..2180.422 rows=172172 loops=3)\n Output: column1, .. 
, column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 66602kB\n Worker 0: Sort Method: quicksort Memory: 55149kB\n Worker 1: Sort Method: quicksort Memory: 54415kB\n Buffers: shared hit=295446\n Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1\n Buffers: shared hit=89981\n Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1\n Buffers: shared hit=90141\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5546.39..323481.21 rows=210418 width=2542) (actual\ntime=340.705..1603.796 rows=172172 loops=3)\n Output: column1, .. , column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=113990\n Buffers: shared hit=295432\n Worker 0: actual time=317.918..1605.548 rows=159556 loops=1\n Buffers: shared hit=89974\n Worker 1: actual time=304.744..1589.221 rows=158554 loops=1\n Buffers: shared hit=90134\n -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n(actual time=218.972..218.973 rows=0 loops=1)\n Buffers: shared hit=1334\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=172\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121\nrows=225283 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=581\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098\nrows=225264 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=581\nSettings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 2.717 ms\nExecution Time: 4224.670 
ms\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-- 3500 MB shared buffers - random_page_cost = 1;\nGather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\ntime=3578.155..4932.858 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=14 read=295432 written=67\n -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\ntime=3482.159..3677.227 rows=172172 loops=3)\n Output: column1, .. , column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 58533kB\n Worker 0: Sort Method: quicksort Memory: 56878kB\n Worker 1: Sort Method: quicksort Memory: 60755kB\n Buffers: shared hit=14 read=295432 written=67\n Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1\n Buffers: shared hit=7 read=95783 written=25\n Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1\n Buffers: shared hit=5 read=101608 written=20\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=5546.39..323481.21 rows=210418 width=2542) (actual\ntime=345.111..3042.932 rows=172172 loops=3)\n Output: column1, .. 
, column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=96709\n Buffers: shared hit=2 read=295430 written=67\n Worker 0: actual time=300.525..2999.403 rows=166842 loops=1\n Buffers: shared read=95783 written=25\n Worker 1: actual time=300.552..3004.859 rows=179354 loops=1\n Buffers: shared read=101606 written=20\n -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n(actual time=241.996..241.997 rows=0 loops=1)\n Buffers: shared hit=2 read=1332\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130\nrows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared read=172\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052\nrows=225283 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=1 read=580\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800\nrows=225264 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=1 read=580\nSettings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.597 ms\nExecution Time: 5389.811 ms\n\n\nThis doesn't seem to have had an effect.\nThanks for the suggestion.\n\nHave you try of excluding not null from index? 
Can you give dispersion of\n> archivestatus?\n>\n\nYes I have, it yielded the same performance boost as :\n\n create index test on logtable(entrytype) where archivestatus <= 1;\n\nI wonder what the old query plan was...\n> Would you include links to your prior correspondence ?\n\n\nSo prior Execution Plans are present in the SO.\nThe other forums I've tried are the official slack channel :\nhttps://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600\nAnd SO :\nhttps://stackoverflow.com/questions/67401792/slow-running-postgresql-query\nBut I think most of the points discussed in these posts have already been\nmentioned by you except bloating of indexes.\n\nOracle is apparently doing a single scan on \"entrytype\".\n> As a test, you could try forcing that, like:\n> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\n> or\n> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n\n\nI've tried enable_bitmapscan=off but it didn't yield any good results.\n\n-- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan = off\nGather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\ntime=7716.031..9043.399 rows=516517 loops=1)\n Output: column1, .., column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=192 read=406605\n -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\ntime=7642.666..7835.527 rows=172172 loops=3)\n Output: column1, .., column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 58803kB\n Worker 0: Sort Method: quicksort Memory: 60376kB\n Worker 1: Sort Method: quicksort Memory: 56988kB\n Buffers: shared hit=192 read=406605\n Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1\n Buffers: shared hit=78 read=137826\n Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1\n Buffers: shared hit=80 read=132672\n -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\nrows=210418 width=2542) (actual 
time=1.669..7189.365 rows=172172 loops=3)\n Output: column1, .., column54\n Filter: ((logtable.archivestatus <= 1) AND\n((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n(logtable.entrytype = 4002)))\n Rows Removed by Filter: 4533459\n Buffers: shared hit=96 read=406605\n Worker 0: actual time=1.537..7158.286 rows=177637 loops=1\n Buffers: shared hit=30 read=137826\n Worker 1: actual time=1.414..7161.670 rows=167316 loops=1\n Buffers: shared hit=32 read=132672\nSettings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.725 ms\nExecution Time: 9500.928 ms\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-- 2000 MB shared buffers - random_page_cost = 1 - enable_bitmapscan = off\nGather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\ntime=7519.032..8871.433 rows=516517 loops=1)\n Output: column1, .., column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=576 read=406221\n -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\ntime=7451.958..7649.480 rows=172172 loops=3)\n Output: column1, .., column54\n Sort Key: logtable.timestampcol DESC\n Sort Method: quicksort Memory: 58867kB\n Worker 0: Sort Method: quicksort Memory: 58510kB\n Worker 1: Sort Method: quicksort Memory: 58788kB\n Buffers: shared hit=576 read=406221\n Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1\n Buffers: shared hit=203 read=135166\n Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1\n Buffers: shared hit=202 read=135225\n -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\nrows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)\n Output: column1, .., column54\n Filter: ((logtable.archivestatus <= 1) AND\n((logtable.entrytype = 4000) OR 
(logtable.entrytype = 4001) OR\n(logtable.entrytype = 4002)))\n Rows Removed by Filter: 4533459\n Buffers: shared hit=480 read=406221\n Worker 0: actual time=2.628..7006.420 rows=172085 loops=1\n Buffers: shared hit=155 read=135166\n Worker 1: actual time=3.948..6978.154 rows=172948 loops=1\n Buffers: shared hit=154 read=135225\nSettings: enable_bitmapscan = 'off', random_page_cost = '1', temp_buffers =\n'80MB', work_mem = '1GB'\nPlanning Time: 0.621 ms\nExecution Time: 9339.457 ms\n\nHave you tune shared buffers enough? Each block is of 8k by default.\n> BTW, please try to reset random_page_cost.\n\n\nLook above.\n\nI will try upgrading the minor version next.\nI will also try setting up a 13.X version locally and import the data from\n12.2 to 13.X and see if it might be faster.\n\n\nAm Do., 6. Mai 2021 um 23:16 Uhr schrieb Imre Samu <[email protected]>:\n\n> *> Postgres Version : *PostgreSQL 12.2,\n> > ... ON ... USING btree\n>\n> IMHO:\n> The next minor (bugix&security) release is near ( expected ~ May 13th,\n> 2021 ) https://www.postgresql.org/developer/roadmap/\n> so you can update your PostgreSQL to 12.7 ( + full Reindexing\n> recommended ! )\n>\n> You can find a lot of B-tree index-related fixes.\n> https://www.postgresql.org/docs/12/release-12-3.html Release date:\n> 2020-05-14\n> - Fix possible undercounting of deleted B-tree index pages in VACUUM\n> VERBOSE output\n> - Fix wrong bookkeeping for oldest deleted page in a B-tree index\n> - Ensure INCLUDE'd columns are always removed from B-tree pivot tuples\n> https://www.postgresql.org/docs/12/release-12-4.html\n> - Avoid repeated marking of dead btree index entries as dead\n> https://www.postgresql.org/docs/12/release-12-5.html\n> - Fix failure of parallel B-tree index scans when the index condition is\n> unsatisfiable\n> https://www.postgresql.org/docs/12/release-12-6.html Release date:\n> 2021-02-11\n>\n>\n> > COLLATE pg_catalog.\"default\"\n>\n> You can test the \"C\" Collation in some columns (keys ? 
) ; in theory,\n> it should be faster :\n> \"The drawback of using locales other than C or POSIX in PostgreSQL is its\n> performance impact. It slows character handling and prevents ordinary\n> indexes from being used by LIKE. For this reason use locales only if you\n> actually need them.\"\n> https://www.postgresql.org/docs/12/locale.html\n>\n> https://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com\n>\n> Best,\n> Imre\n>\n>\n> Semen Yefimenko <[email protected]> ezt írta (időpont: 2021. máj.\n> 6., Cs, 16:38):\n>\n>> Hi there,\n>>\n>> I've recently been involved in migrating our old system to SQL Server and\n>> then PostgreSQL. Everything has been working fine so far but now after\n>> executing our tests on Postgres, we saw a very slow running query on a\n>> large table in our database.\n>> I have tried asking on other platforms but no one has been able to give\n>> me a satisfying answer.\n>>\n>> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build 1914,\n>> 64-bit\n>> No notable errors in the Server log and the Postgres Server itself.\n>>\n>> The table structure :\n>>\n>> CREATE TABLE logtable\n>> (\n>> key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n>> id integer,\n>> column3 integer,\n>> column4 integer,\n>> column5 integer,\n>> column6 integer,\n>> column7 integer,\n>> column8 integer,\n>> column9 character varying(128) COLLATE pg_catalog.\"default\",\n>> column10 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column11 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column12 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column13 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column14 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column15 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column16 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column17 character varying(2048) COLLATE 
pg_catalog.\"default\",\n>> column18 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column19 character varying(2048) COLLATE pg_catalog.\"default\",\n>> column21 character varying(256) COLLATE pg_catalog.\"default\",\n>> column22 character varying(256) COLLATE pg_catalog.\"default\",\n>> column23 character varying(256) COLLATE pg_catalog.\"default\",\n>> column24 character varying(256) COLLATE pg_catalog.\"default\",\n>> column25 character varying(256) COLLATE pg_catalog.\"default\",\n>> column26 character varying(256) COLLATE pg_catalog.\"default\",\n>> column27 character varying(256) COLLATE pg_catalog.\"default\",\n>> column28 character varying(256) COLLATE pg_catalog.\"default\",\n>> column29 character varying(256) COLLATE pg_catalog.\"default\",\n>> column30 character varying(256) COLLATE pg_catalog.\"default\",\n>> column31 character varying(256) COLLATE pg_catalog.\"default\",\n>> column32 character varying(256) COLLATE pg_catalog.\"default\",\n>> column33 character varying(256) COLLATE pg_catalog.\"default\",\n>> column34 character varying(256) COLLATE pg_catalog.\"default\",\n>> column35 character varying(256) COLLATE pg_catalog.\"default\",\n>> entrytype integer,\n>> column37 bigint,\n>> column38 bigint,\n>> column39 bigint,\n>> column40 bigint,\n>> column41 bigint,\n>> column42 bigint,\n>> column43 bigint,\n>> column44 bigint,\n>> column45 bigint,\n>> column46 bigint,\n>> column47 character varying(128) COLLATE pg_catalog.\"default\",\n>> timestampcol timestamp without time zone,\n>> column49 timestamp without time zone,\n>> column50 timestamp without time zone,\n>> column51 timestamp without time zone,\n>> column52 timestamp without time zone,\n>> archivestatus integer,\n>> column54 integer,\n>> column55 character varying(20) COLLATE pg_catalog.\"default\",\n>> CONSTRAINT pkey PRIMARY KEY (key)\n>> USING INDEX TABLESPACE tablespace\n>> )\n>>\n>> TABLESPACE tablespace;\n>>\n>> ALTER TABLE schema.logtable\n>> OWNER to user;\n>>\n>> CREATE 
INDEX idx_timestampcol\n>> ON schema.logtable USING btree\n>> ( timestampcol ASC NULLS LAST )\n>> TABLESPACE tablespace ;\n>>\n>> CREATE INDEX idx_test2\n>> ON schema.logtable USING btree\n>> ( entrytype ASC NULLS LAST)\n>> TABLESPACE tablespace\n>> WHERE archivestatus <= 1;\n>>\n>> CREATE INDEX idx_arcstatus\n>> ON schema.logtable USING btree\n>> ( archivestatus ASC NULLS LAST)\n>> TABLESPACE tablespace;\n>>\n>> CREATE INDEX idx_entrytype\n>> ON schema.logtable USING btree\n>> ( entrytype ASC NULLS LAST)\n>> TABLESPACE tablespace ;\n>>\n>>\n>> The table contains 14.000.000 entries and has about 3.3 GB of data:\n>> No triggers, inserts per day, probably 5-20 K per day.\n>>\n>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>> relname='logtable';\n>>\n>> relname\n>> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n>>\n>> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\n>> logtable | 405988| 14091424| 405907|r |\n>> 54|false |NULL | 3326803968|\n>>\n>>\n>> The slow running query:\n>>\n>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype =\n>> 4001 or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol\n>> desc;\n>>\n>>\n>> This query runs in about 45-60 seconds.\n>> The same query runs in about 289 ms Oracle and 423 ms in SQL-Server.\n>> Now I understand that actually loading all results would take a while.\n>> (about 520K or so rows)\n>> But that shouldn't be exactly what happens right? There should be a\n>> resultset iterator which can retrieve all data but doesn't from the get go.\n>>\n>> With the help of some people in the slack and so thread, I've found a\n>> configuration parameter which helps performance :\n>>\n>> set random_page_cost = 1;\n>>\n>> This improved performance from 45-60 s to 15-35 s. 
(since we are using\n>> ssd's)\n>> Still not acceptable but definitely an improvement.\n>> Some maybe relevant system parameters:\n>>\n>> effective_cache_size 4GB\n>> maintenance_work_mem 1GB\n>> shared_buffers 2GB\n>> work_mem 1GB\n>>\n>>\n>> Currently I'm accessing the data through DbBeaver (JDBC -\n>> postgresql-42.2.5.jar) and our JAVA application (JDBC -\n>> postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load\n>> everything into memory and limit the results.\n>> The explain plan:\n>>\n>> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n>> (Above Query)\n>>\n>>\n>> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual\n>> time=21210.019..22319.444 rows=515841 loops=1)\n>> Output: column1, .. , column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=141487 read=153489\n>> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\n>> time=21148.887..21297.428 rows=171947 loops=3)\n>> Output: column1, .. , column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 62180kB\n>> Worker 0: Sort Method: quicksort Memory: 56969kB\n>> Worker 1: Sort Method: quicksort Memory: 56837kB\n>> Buffers: shared hit=141487 read=153489\n>> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n>> Buffers: shared hit=45558 read=49514\n>> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n>> Buffers: shared hit=45104 read=49506\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5652.74..327147.77 rows=214503 width=2558) (actual\n>> time=1304.813..20637.462 rows=171947 loops=3)\n>> Output: column1, .. 
, column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=103962\n>> Buffers: shared hit=141473 read=153489\n>> Worker 0: actual time=1280.472..20638.620 rows=166776\n>> loops=1\n>> Buffers: shared hit=45551 read=49514\n>> Worker 1: actual time=1275.274..20626.219 rows=165896\n>> loops=1\n>> Buffers: shared hit=45097 read=49506\n>> -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0)\n>> (actual time=1179.438..1179.438 rows=0 loops=1)\n>> Buffers: shared hit=9 read=1323\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\n>> rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared hit=1 read=171\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\n>> rows=224945 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=4 read=576\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637\n>> rows=224926 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=4 read=576\n>> Settings: random_page_cost = '1', search_path = '\"$user\", schema,\n>> public', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 0.578 ms\n>> Execution Time: 22617.351 ms\n>>\n>> As mentioned before, oracle does this much faster.\n>>\n>>\n>> -------------------------------------------------------------------------------------------------------------------------\n>> | Id | Operation | Name |\n>> Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n>>\n>> -------------------------------------------------------------------------------------------------------------------------\n>> | 0 | SELECT STATEMENT | |\n>> 6878 | 2491K| | 2143 (1)| 00:00:01 |\n>> | 1 | SORT ORDER BY | |\n>> 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 
|\n>> | 2 | INLIST ITERATOR | |\n>> | | | | |\n>> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable |\n>> 6878 | 2491K| | 1597 (1)| 00:00:01 |\n>> |* 4 | INDEX RANGE SCAN | idx_entrytype |\n>> 6878 | | | 23 (0)| 00:00:01 |\n>>\n>> -------------------------------------------------------------------------------------------------------------------------\n>>\n>> Is there much I can analyze, any information you might need to further\n>> analyze this?\n>>\n>\n",
"msg_date": "Fri, 7 May 2021 10:04:01 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "ok one last thing, not to be a PITA, but just in case if this helps.\n\npostgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable';\npostgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable';\nbasically, i just to verify if the table is not bloated.\nlooking at *n_live_tup* vs *n_dead_tup* would help understand it.\n\nif you see too many dead tuples,\nvacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if\nthere are no tx using the dead tuples)\n\nand then run your query.\n\nThanks,\nVijay\n\n\n\n\n\n\nOn Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]>\nwrote:\n\n> Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure\n> how I'm supposed to do it. (single E-Mails vs many)\n>\n>\n>> Can you try tuning by increasing the shared_buffers slowly in steps of\n>> 500MB, and running explain analyze against the query.\n>\n>\n> -- 2500 MB shared buffers - random_page_cost = 1;\n> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n> time=2076.329..3737.050 rows=516517 loops=1)\n> Output: column1, .. , column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=295446\n> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n> time=2007.487..2202.707 rows=172172 loops=3)\n> Output: column1, .. , column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 65154kB\n> Worker 0: Sort Method: quicksort Memory: 55707kB\n> Worker 1: Sort Method: quicksort Memory: 55304kB\n> Buffers: shared hit=295446\n> Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1\n> Buffers: shared hit=91028\n> Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1\n> Buffers: shared hit=92133\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n> time=322.125..1618.971 rows=172172 loops=3)\n> Output: column1, .. 
, column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=110951\n> Buffers: shared hit=295432\n> Worker 0: actual time=282.201..1595.117 rows=161205 loops=1\n> Buffers: shared hit=91021\n> Worker 1: actual time=303.671..1623.299 rows=161935 loops=1\n> Buffers: shared hit=92126\n> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n> (actual time=199.119..199.119 rows=0 loops=1)\n> Buffers: shared hit=1334\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857\n> rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared hit=172\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872\n> rows=225283 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=581\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377\n> rows=225264 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=581\n> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 0.940 ms\n> Execution Time: 4188.083 ms\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -- 3000 MB shared buffers - random_page_cost = 1;\n> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n> time=2062.280..3763.408 rows=516517 loops=1)\n> Output: column1, .. , column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=295446\n> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n> time=1987.933..2180.422 rows=172172 loops=3)\n> Output: column1, .. 
, column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 66602kB\n> Worker 0: Sort Method: quicksort Memory: 55149kB\n> Worker 1: Sort Method: quicksort Memory: 54415kB\n> Buffers: shared hit=295446\n> Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1\n> Buffers: shared hit=89981\n> Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1\n> Buffers: shared hit=90141\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n> time=340.705..1603.796 rows=172172 loops=3)\n> Output: column1, .. , column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=113990\n> Buffers: shared hit=295432\n> Worker 0: actual time=317.918..1605.548 rows=159556 loops=1\n> Buffers: shared hit=89974\n> Worker 1: actual time=304.744..1589.221 rows=158554 loops=1\n> Buffers: shared hit=90134\n> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n> (actual time=218.972..218.973 rows=0 loops=1)\n> Buffers: shared hit=1334\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742\n> rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared hit=172\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121\n> rows=225283 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=581\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098\n> rows=225264 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=581\n> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 2.717 ms\n> Execution Time: 4224.670 ms\n>\n> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -- 3500 MB shared buffers - random_page_cost = 1;\n> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n> time=3578.155..4932.858 rows=516517 loops=1)\n> Output: column1, .. , column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=14 read=295432 written=67\n> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n> time=3482.159..3677.227 rows=172172 loops=3)\n> Output: column1, .. , column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 58533kB\n> Worker 0: Sort Method: quicksort Memory: 56878kB\n> Worker 1: Sort Method: quicksort Memory: 60755kB\n> Buffers: shared hit=14 read=295432 written=67\n> Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1\n> Buffers: shared hit=7 read=95783 written=25\n> Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1\n> Buffers: shared hit=5 read=101608 written=20\n> -> Parallel Bitmap Heap Scan on schema.logtable\n> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n> time=345.111..3042.932 rows=172172 loops=3)\n> Output: column1, .. 
, column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR\n> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.archivestatus <= 1)\n> Heap Blocks: exact=96709\n> Buffers: shared hit=2 read=295430 written=67\n> Worker 0: actual time=300.525..2999.403 rows=166842 loops=1\n> Buffers: shared read=95783 written=25\n> Worker 1: actual time=300.552..3004.859 rows=179354 loops=1\n> Buffers: shared read=101606 written=20\n> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n> (actual time=241.996..241.997 rows=0 loops=1)\n> Buffers: shared hit=2 read=1332\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130\n> rows=65970 loops=1)\n> Index Cond: (logtable.entrytype = 4000)\n> Buffers: shared read=172\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052\n> rows=225283 loops=1)\n> Index Cond: (logtable.entrytype = 4001)\n> Buffers: shared hit=1 read=580\n> -> Bitmap Index Scan on idx_entrytype\n> (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800\n> rows=225264 loops=1)\n> Index Cond: (logtable.entrytype = 4002)\n> Buffers: shared hit=1 read=580\n> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n> Planning Time: 0.597 ms\n> Execution Time: 5389.811 ms\n>\n>\n> This doesn't seem to have had an effect.\n> Thanks for the suggestion.\n>\n> Have you try of excluding not null from index? 
Can you give dispersion of\n>> archivestatus?\n>>\n>\n> Yes I have, it yielded the same performance boost as :\n>\n> create index test on logtable(entrytype) where archivestatus <= 1;\n>\n> I wonder what the old query plan was...\n>> Would you include links to your prior correspondance ?\n>\n>\n> So prior Execution Plans are present in the SO.\n> The other forums I've tried are the official slack channel :\n> https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600\n> And SO :\n> https://stackoverflow.com/questions/67401792/slow-running-postgresql-query\n> But I think most of the points discussed in these posts have already been\n> mentionend by you except bloating of indexes.\n>\n> Oracle is apparently doing a single scan on \"entrytype\".\n>> As a test, you could try forcing that, like:\n>> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\n>> or\n>> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n>\n>\n> I've tried enable_bitmapscan=off but it didn't yield any good results.\n>\n> -- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to off\n> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n> time=7716.031..9043.399 rows=516517 loops=1)\n> Output: column1, .., column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=192 read=406605\n> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n> time=7642.666..7835.527 rows=172172 loops=3)\n> Output: column1, .., column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 58803kB\n> Worker 0: Sort Method: quicksort Memory: 60376kB\n> Worker 1: Sort Method: quicksort Memory: 56988kB\n> Buffers: shared hit=192 read=406605\n> Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1\n> Buffers: shared hit=78 read=137826\n> Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1\n> Buffers: shared hit=80 read=132672\n> -> Parallel Seq Scan on schema.logtable 
(cost=0.00..524345.70\n> rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3)\n> Output: column1, .., column54\n> Filter: ((logtable.acrhivestatus <= 1) AND\n> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n> (logtable.entrytype = 4002)))\n> Rows Removed by Filter: 4533459\n> Buffers: shared hit=96 read=406605\n> Worker 0: actual time=1.537..7158.286 rows=177637 loops=1\n> Buffers: shared hit=30 read=137826\n> Worker 1: actual time=1.414..7161.670 rows=167316 loops=1\n> Buffers: shared hit=32 read=132672\n> Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem =\n> '1GB'\n> Planning Time: 0.725 ms\n> Execution Time: 9500.928 ms\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> 2000 MB shared buffers - random_page_cost = 4 - -- 2000 -- 2000 MB shared\n> buffers - random_page_cost = 1 - enable_bitmapscan to off\n> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n> time=7519.032..8871.433 rows=516517 loops=1)\n> Output: column1, .., column54\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=576 read=406221\n> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n> time=7451.958..7649.480 rows=172172 loops=3)\n> Output: column1, .., column54\n> Sort Key: logtable.timestampcol DESC\n> Sort Method: quicksort Memory: 58867kB\n> Worker 0: Sort Method: quicksort Memory: 58510kB\n> Worker 1: Sort Method: quicksort Memory: 58788kB\n> Buffers: shared hit=576 read=406221\n> Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1\n> Buffers: shared hit=203 read=135166\n> Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1\n> Buffers: shared hit=202 read=135225\n> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n> rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)\n> 
Output: column1, .., column54\n> Filter: ((logtable.archivestatus <= 1) AND\n> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n> (logtable.entrytype = 4002)))\n> Rows Removed by Filter: 4533459\n> Buffers: shared hit=480 read=406221\n> Worker 0: actual time=2.628..7006.420 rows=172085 loops=1\n> Buffers: shared hit=155 read=135166\n> Worker 1: actual time=3.948..6978.154 rows=172948 loops=1\n> Buffers: shared hit=154 read=135225\n> Settings: enable_bitmapscan = 'off', random_page_cost = '1', temp_buffers\n> = '80MB', work_mem = '1GB'\n> Planning Time: 0.621 ms\n> Execution Time: 9339.457 ms\n>\n>> Have you tuned shared_buffers enough? Each block is 8 kB by default.\n>> BTW, please try to reset random_page_cost.\n>\n>\n> Look above.\n>\n> I will try upgrading the minor version next.\n> I will also try setting up a 13.X version locally and import the data from\n> 12.2 to 13.X and see if it might be faster.\n>\n>\n> On Thu, 6 May 2021 at 23:16, Imre Samu <[email protected]> wrote:\n>\n>> *> Postgres Version : *PostgreSQL 12.2,\n>> > ... ON ... USING btree\n>>\n>> IMHO:\n>> The next minor (bugfix & security) release is near ( expected ~ May 13th,\n>> 2021 ) https://www.postgresql.org/developer/roadmap/\n>> so you can update your PostgreSQL to 12.7 ( + full Reindexing\n>> recommended ! 
)\n>>\n>> You can find a lot of B-tree index-related fixes.\n>> https://www.postgresql.org/docs/12/release-12-3.html Release date:\n>> 2020-05-14\n>> - Fix possible undercounting of deleted B-tree index pages in VACUUM\n>> VERBOSE output\n>> - Fix wrong bookkeeping for oldest deleted page in a B-tree index\n>> - Ensure INCLUDE'd columns are always removed from B-tree pivot tuples\n>> https://www.postgresql.org/docs/12/release-12-4.html\n>> - Avoid repeated marking of dead btree index entries as dead\n>> https://www.postgresql.org/docs/12/release-12-5.html\n>> - Fix failure of parallel B-tree index scans when the index condition\n>> is unsatisfiable\n>> https://www.postgresql.org/docs/12/release-12-6.html Release date:\n>> 2021-02-11\n>>\n>>\n>> > COLLATE pg_catalog.\"default\"\n>>\n>> You can test the \"C\" Collation in some columns (keys ? ) ; in theory,\n>> it should be faster :\n>> \"The drawback of using locales other than C or POSIX in PostgreSQL is its\n>> performance impact. It slows character handling and prevents ordinary\n>> indexes from being used by LIKE. For this reason use locales only if you\n>> actually need them.\"\n>> https://www.postgresql.org/docs/12/locale.html\n>>\n>> https://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com\n>>\n>> Best,\n>> Imre\n>>\n>>\n>> On Thu, 6 May 2021 at 16:38, Semen Yefimenko <[email protected]> wrote:\n>>\n>>> Hi there,\n>>>\n>>> I've recently been involved in migrating our old system to SQL Server\n>>> and then PostgreSQL. 
Everything has been working fine so far but now after\n>>> executing our tests on Postgres, we saw a very slow running query on a\n>>> large table in our database.\n>>> I have tried asking on other platforms but no one has been able to give\n>>> me a satisfying answer.\n>>>\n>>> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build\n>>> 1914, 64-bit\n>>> No notable errors in the Server log and the Postgres Server itself.\n>>>\n>>> The table structure :\n>>>\n>>> CREATE TABLE logtable\n>>> (\n>>> key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n>>> id integer,\n>>> column3 integer,\n>>> column4 integer,\n>>> column5 integer,\n>>> column6 integer,\n>>> column7 integer,\n>>> column8 integer,\n>>> column9 character varying(128) COLLATE pg_catalog.\"default\",\n>>> column10 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column11 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column12 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column13 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column14 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column15 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column16 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column17 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column18 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column19 character varying(2048) COLLATE pg_catalog.\"default\",\n>>> column21 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column22 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column23 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column24 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column25 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column26 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column27 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column28 
character varying(256) COLLATE pg_catalog.\"default\",\n>>> column29 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column30 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column31 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column32 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column33 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column34 character varying(256) COLLATE pg_catalog.\"default\",\n>>> column35 character varying(256) COLLATE pg_catalog.\"default\",\n>>> entrytype integer,\n>>> column37 bigint,\n>>> column38 bigint,\n>>> column39 bigint,\n>>> column40 bigint,\n>>> column41 bigint,\n>>> column42 bigint,\n>>> column43 bigint,\n>>> column44 bigint,\n>>> column45 bigint,\n>>> column46 bigint,\n>>> column47 character varying(128) COLLATE pg_catalog.\"default\",\n>>> timestampcol timestamp without time zone,\n>>> column49 timestamp without time zone,\n>>> column50 timestamp without time zone,\n>>> column51 timestamp without time zone,\n>>> column52 timestamp without time zone,\n>>> archivestatus integer,\n>>> column54 integer,\n>>> column55 character varying(20) COLLATE pg_catalog.\"default\",\n>>> CONSTRAINT pkey PRIMARY KEY (key)\n>>> USING INDEX TABLESPACE tablespace\n>>> )\n>>>\n>>> TABLESPACE tablespace;\n>>>\n>>> ALTER TABLE schema.logtable\n>>> OWNER to user;\n>>>\n>>> CREATE INDEX idx_timestampcol\n>>> ON schema.logtable USING btree\n>>> ( timestampcol ASC NULLS LAST )\n>>> TABLESPACE tablespace ;\n>>>\n>>> CREATE INDEX idx_test2\n>>> ON schema.logtable USING btree\n>>> ( entrytype ASC NULLS LAST)\n>>> TABLESPACE tablespace\n>>> WHERE archivestatus <= 1;\n>>>\n>>> CREATE INDEX idx_arcstatus\n>>> ON schema.logtable USING btree\n>>> ( archivestatus ASC NULLS LAST)\n>>> TABLESPACE tablespace;\n>>>\n>>> CREATE INDEX idx_entrytype\n>>> ON schema.logtable USING btree\n>>> ( entrytype ASC NULLS LAST)\n>>> TABLESPACE tablespace ;\n>>>\n>>>\n>>> The table contains 14.000.000 entries and 
has about 3.3 GB of data:\n>>> No triggers; roughly 5-20 K inserts per day.\n>>>\n>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>>> relname='logtable';\n>>>\n>>> relname\n>>> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n>>>\n>>> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\n>>> logtable | 405988| 14091424| 405907|r |\n>>> 54|false |NULL | 3326803968|\n>>>\n>>>\n>>> The slow running query:\n>>>\n>>> SELECT column1,..., column54 FROM logtable where ((entrytype = 4000 or\n>>> entrytype = 4001 or entrytype = 4002) and (archivestatus <= 1)) order by\n>>> timestampcol desc;\n>>>\n>>>\n>>> This query runs in about 45-60 seconds.\n>>> The same query runs in about 289 ms in Oracle and 423 ms in SQL Server.\n>>> Now I understand that actually loading all results would take a while.\n>>> (about 520K or so rows)\n>>> But that shouldn't be exactly what happens, right? There should be a\n>>> result-set iterator which can retrieve all data but doesn't do so from the\n>>> get-go.\n>>>\n>>> With the help of some people in the Slack and SO threads, I've found a\n>>> configuration parameter which helps performance:\n>>>\n>>> set random_page_cost = 1;\n>>>\n>>> This improved performance from 45-60 s to 15-35 s (since we are using\n>>> SSDs).\n>>> Still not acceptable, but definitely an improvement.\n>>> Some maybe relevant system parameters:\n>>>\n>>> effective_cache_size 4GB\n>>> maintenance_work_mem 1GB\n>>> shared_buffers 2GB\n>>> work_mem 1GB\n>>>\n>>>\n>>> Currently I'm accessing the data through DbBeaver (JDBC -\n>>> postgresql-42.2.5.jar) and our JAVA application (JDBC -\n>>> postgresql-42.2.19.jar). 
Both use the defaultRowFetchSize=5000 to not load\n>>> everything into memory and limit the results.\n>>> The explain plan:\n>>>\n>>> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n>>> (Above Query)\n>>>\n>>>\n>>> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual\n>>> time=21210.019..22319.444 rows=515841 loops=1)\n>>> Output: column1, .. , column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=141487 read=153489\n>>> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\n>>> time=21148.887..21297.428 rows=171947 loops=3)\n>>> Output: column1, .. , column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 62180kB\n>>> Worker 0: Sort Method: quicksort Memory: 56969kB\n>>> Worker 1: Sort Method: quicksort Memory: 56837kB\n>>> Buffers: shared hit=141487 read=153489\n>>> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n>>> Buffers: shared hit=45558 read=49514\n>>> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n>>> Buffers: shared hit=45104 read=49506\n>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>> (cost=5652.74..327147.77 rows=214503 width=2558) (actual\n>>> time=1304.813..20637.462 rows=171947 loops=3)\n>>> Output: column1, .. 
, column54\n>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>> Filter: (logtable.archivestatus <= 1)\n>>> Heap Blocks: exact=103962\n>>> Buffers: shared hit=141473 read=153489\n>>> Worker 0: actual time=1280.472..20638.620 rows=166776\n>>> loops=1\n>>> Buffers: shared hit=45551 read=49514\n>>> Worker 1: actual time=1275.274..20626.219 rows=165896\n>>> loops=1\n>>> Buffers: shared hit=45097 read=49506\n>>> -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0)\n>>> (actual time=1179.438..1179.438 rows=0 loops=1)\n>>> Buffers: shared hit=9 read=1323\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\n>>> rows=65970 loops=1)\n>>> Index Cond: (logtable.entrytype = 4000)\n>>> Buffers: shared hit=1 read=171\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\n>>> rows=224945 loops=1)\n>>> Index Cond: (logtable.entrytype = 4001)\n>>> Buffers: shared hit=4 read=576\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637\n>>> rows=224926 loops=1)\n>>> Index Cond: (logtable.entrytype = 4002)\n>>> Buffers: shared hit=4 read=576\n>>> Settings: random_page_cost = '1', search_path = '\"$user\", schema,\n>>> public', temp_buffers = '80MB', work_mem = '1GB'\n>>> Planning Time: 0.578 ms\n>>> Execution Time: 22617.351 ms\n>>>\n>>> As mentioned before, oracle does this much faster.\n>>>\n>>>\n>>> -------------------------------------------------------------------------------------------------------------------------\n>>> | Id | Operation | Name\n>>> | Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n>>>\n>>> -------------------------------------------------------------------------------------------------------------------------\n>>> | 0 | SELECT STATEMENT |\n>>> | 6878 | 2491K| | 2143 (1)| 00:00:01 |\n>>> | 1 | SORT ORDER BY 
|\n>>> | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n>>> | 2 | INLIST ITERATOR |\n>>> | | | | | |\n>>> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable\n>>> | 6878 | 2491K| | 1597 (1)| 00:00:01 |\n>>> |* 4 | INDEX RANGE SCAN | idx_entrytype\n>>> | 6878 | | | 23 (0)| 00:00:01 |\n>>>\n>>> -------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Is there much I can analyze, any information you might need to further\n>>> analyze this?\n>>>\n>>\n\n-- \nThanks,\nVijay\nMumbai, India
-------------------------------------------------------------------------------------------------------------------------| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |-------------------------------------------------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 6878 | 2491K| | 2143 (1)| 00:00:01 || 1 | SORT ORDER BY | | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 || 2 | INLIST ITERATOR | | | | | | ||* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable | 6878 | 2491K| | 1597 (1)| 00:00:01 ||* 4 | INDEX RANGE SCAN | idx_entrytype | 6878 | | | 23 (0)| 00:00:01 |-------------------------------------------------------------------------------------------------------------------------Is there much I can analyze, any information you might need to further analyze this? \n\n\n-- Thanks,VijayMumbai, India",
"msg_date": "Fri, 7 May 2021 14:14:32 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "As mentionend in the slack comments :\n\nSELECT pg_size_pretty(pg_relation_size('logtable')) as table_size,\npg_size_pretty(pg_relation_size('idx_entrytype')) as index_size,\n(pgstattuple('logtable')).dead_tuple_percent;\n\ntable_size | index_size | dead_tuple_percent\n------------+------------+--------------------\n3177 MB | 289 MB | 0\n\nI have roughly 6 indexes which all have around 300 MB\n\nSELECT pg_relation_size('logtable') as table_size,\npg_relation_size(idx_entrytype) as index_size,\n100-(pgstatindex('idx_entrytype')).avg_leaf_density as bloat_ratio\n\ntable_size | index_size | bloat_ratio\n------------+------------+-------------------\n3331694592 | 302555136 | 5.219999999999999\n\nYour queries:\n\nn_live_tup n_dead_tup\n14118380 0\n\n\nFor testing, I've also been running VACUUM and ANALYZE pretty much before\nevery test run.\n\nAm Fr., 7. Mai 2021 um 10:44 Uhr schrieb Vijaykumar Jain <\[email protected]>:\n\n> ok one last thing, not to be a PITA, but just in case if this helps.\n>\n> postgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable';\n> postgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable';\n> basically, i just to verify if the table is not bloated.\n> looking at *n_live_tup* vs *n_dead_tup* would help understand it.\n>\n> if you see too many dead tuples,\n> vacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if\n> there are no tx using the dead tuples)\n>\n> and then run your query.\n>\n> Thanks,\n> Vijay\n>\n>\n>\n>\n>\n>\n> On Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]>\n> wrote:\n>\n>> Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure\n>> how I'm supposed to do it. 
(single E-Mails vs many)\n>>\n>>\n>>> Can you try tuning by increasing the shared_buffers slowly in steps of\n>>> 500MB, and running explain analyze against the query.\n>>\n>>\n>> -- 2500 MB shared buffers - random_page_cost = 1;\n>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>> time=2076.329..3737.050 rows=516517 loops=1)\n>> Output: column1, .. , column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=295446\n>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>> time=2007.487..2202.707 rows=172172 loops=3)\n>> Output: column1, .. , column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 65154kB\n>> Worker 0: Sort Method: quicksort Memory: 55707kB\n>> Worker 1: Sort Method: quicksort Memory: 55304kB\n>> Buffers: shared hit=295446\n>> Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1\n>> Buffers: shared hit=91028\n>> Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1\n>> Buffers: shared hit=92133\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>> time=322.125..1618.971 rows=172172 loops=3)\n>> Output: column1, .. 
, column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=110951\n>> Buffers: shared hit=295432\n>> Worker 0: actual time=282.201..1595.117 rows=161205 loops=1\n>> Buffers: shared hit=91021\n>> Worker 1: actual time=303.671..1623.299 rows=161935 loops=1\n>> Buffers: shared hit=92126\n>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>> (actual time=199.119..199.119 rows=0 loops=1)\n>> Buffers: shared hit=1334\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857\n>> rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared hit=172\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872\n>> rows=225283 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=581\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377\n>> rows=225264 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=581\n>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 0.940 ms\n>> Execution Time: 4188.083 ms\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> -- 3000 MB shared buffers - random_page_cost = 1;\n>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>> time=2062.280..3763.408 rows=516517 loops=1)\n>> Output: column1, .. , column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=295446\n>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>> time=1987.933..2180.422 rows=172172 loops=3)\n>> Output: column1, .. 
, column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 66602kB\n>> Worker 0: Sort Method: quicksort Memory: 55149kB\n>> Worker 1: Sort Method: quicksort Memory: 54415kB\n>> Buffers: shared hit=295446\n>> Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1\n>> Buffers: shared hit=89981\n>> Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1\n>> Buffers: shared hit=90141\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>> time=340.705..1603.796 rows=172172 loops=3)\n>> Output: column1, .. , column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=113990\n>> Buffers: shared hit=295432\n>> Worker 0: actual time=317.918..1605.548 rows=159556 loops=1\n>> Buffers: shared hit=89974\n>> Worker 1: actual time=304.744..1589.221 rows=158554 loops=1\n>> Buffers: shared hit=90134\n>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>> (actual time=218.972..218.973 rows=0 loops=1)\n>> Buffers: shared hit=1334\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742\n>> rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared hit=172\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121\n>> rows=225283 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=581\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098\n>> rows=225264 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=581\n>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 2.717 ms\n>> Execution Time: 4224.670 ms\n>>\n>> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> -- 3500 MB shared buffers - random_page_cost = 1;\n>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>> time=3578.155..4932.858 rows=516517 loops=1)\n>> Output: column1, .. , column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=14 read=295432 written=67\n>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>> time=3482.159..3677.227 rows=172172 loops=3)\n>> Output: column1, .. , column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 58533kB\n>> Worker 0: Sort Method: quicksort Memory: 56878kB\n>> Worker 1: Sort Method: quicksort Memory: 60755kB\n>> Buffers: shared hit=14 read=295432 written=67\n>> Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1\n>> Buffers: shared hit=7 read=95783 written=25\n>> Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1\n>> Buffers: shared hit=5 read=101608 written=20\n>> -> Parallel Bitmap Heap Scan on schema.logtable\n>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>> time=345.111..3042.932 rows=172172 loops=3)\n>> Output: column1, .. 
, column54\n>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>> Filter: (logtable.archivestatus <= 1)\n>> Heap Blocks: exact=96709\n>> Buffers: shared hit=2 read=295430 written=67\n>> Worker 0: actual time=300.525..2999.403 rows=166842 loops=1\n>> Buffers: shared read=95783 written=25\n>> Worker 1: actual time=300.552..3004.859 rows=179354 loops=1\n>> Buffers: shared read=101606 written=20\n>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>> (actual time=241.996..241.997 rows=0 loops=1)\n>> Buffers: shared hit=2 read=1332\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130\n>> rows=65970 loops=1)\n>> Index Cond: (logtable.entrytype = 4000)\n>> Buffers: shared read=172\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052\n>> rows=225283 loops=1)\n>> Index Cond: (logtable.entrytype = 4001)\n>> Buffers: shared hit=1 read=580\n>> -> Bitmap Index Scan on idx_entrytype\n>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800\n>> rows=225264 loops=1)\n>> Index Cond: (logtable.entrytype = 4002)\n>> Buffers: shared hit=1 read=580\n>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>> Planning Time: 0.597 ms\n>> Execution Time: 5389.811 ms\n>>\n>>\n>> This doesn't seem to have had an effect.\n>> Thanks for the suggestion.\n>>\n>> Have you tried excluding not null from the index? 
Can you give the dispersion of\n>>> archivestatus?\n>>>\n>>\n>> Yes I have, it yielded the same performance boost as:\n>>\n>> create index test on logtable(entrytype) where archivestatus <= 1;\n>>\n>> I wonder what the old query plan was...\n>>> Would you include links to your prior correspondence?\n>>\n>>\n>> So prior Execution Plans are present in the SO.\n>> The other forums I've tried are the official slack channel:\n>> https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600\n>> And SO:\n>> https://stackoverflow.com/questions/67401792/slow-running-postgresql-query\n>> But I think most of the points discussed in these posts have already been\n>> mentioned by you except bloating of indexes.\n>>\n>> Oracle is apparently doing a single scan on \"entrytype\".\n>>> As a test, you could try forcing that, like:\n>>> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\n>>> or\n>>> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n>>\n>>\n>> I've tried enable_bitmapscan=off but it didn't yield any good results.\n>>\n>> -- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to\n>> off\n>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n>> time=7716.031..9043.399 rows=516517 loops=1)\n>> Output: column1, .., column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=192 read=406605\n>> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n>> time=7642.666..7835.527 rows=172172 loops=3)\n>> Output: column1, .., column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 58803kB\n>> Worker 0: Sort Method: quicksort Memory: 60376kB\n>> Worker 1: Sort Method: quicksort Memory: 56988kB\n>> Buffers: shared hit=192 read=406605\n>> Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1\n>> Buffers: shared hit=78 read=137826\n>> Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1\n>> Buffers: shared hit=80 
read=132672\n>> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n>> rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3)\n>> Output: column1, .., column54\n>> Filter: ((logtable.archivestatus <= 1) AND\n>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>> (logtable.entrytype = 4002)))\n>> Rows Removed by Filter: 4533459\n>> Buffers: shared hit=96 read=406605\n>> Worker 0: actual time=1.537..7158.286 rows=177637 loops=1\n>> Buffers: shared hit=30 read=137826\n>> Worker 1: actual time=1.414..7161.670 rows=167316 loops=1\n>> Buffers: shared hit=32 read=132672\n>> Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem =\n>> '1GB'\n>> Planning Time: 0.725 ms\n>> Execution Time: 9500.928 ms\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> -- 2000 MB shared buffers - random_page_cost = 1 - enable_bitmapscan to\n>> off\n>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n>> time=7519.032..8871.433 rows=516517 loops=1)\n>> Output: column1, .., column54\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=576 read=406221\n>> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n>> time=7451.958..7649.480 rows=172172 loops=3)\n>> Output: column1, .., column54\n>> Sort Key: logtable.timestampcol DESC\n>> Sort Method: quicksort Memory: 58867kB\n>> Worker 0: Sort Method: quicksort Memory: 58510kB\n>> Worker 1: Sort Method: quicksort Memory: 58788kB\n>> Buffers: shared hit=576 read=406221\n>> Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1\n>> Buffers: shared hit=203 read=135166\n>> Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1\n>> Buffers: shared hit=202 read=135225\n>> -> Parallel Seq Scan on schema.logtable 
(cost=0.00..524345.70\n>> rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)\n>> Output: column1, .., column54\n>> Filter: ((logtable.archivestatus <= 1) AND\n>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>> (logtable.entrytype = 4002)))\n>> Rows Removed by Filter: 4533459\n>> Buffers: shared hit=480 read=406221\n>> Worker 0: actual time=2.628..7006.420 rows=172085 loops=1\n>> Buffers: shared hit=155 read=135166\n>> Worker 1: actual time=3.948..6978.154 rows=172948 loops=1\n>> Buffers: shared hit=154 read=135225\n>> Settings: enable_bitmapscan = 'off', random_page_cost = '1', temp_buffers\n>> = '80MB', work_mem = '1GB'\n>> Planning Time: 0.621 ms\n>> Execution Time: 9339.457 ms\n>>\n>> Have you tuned shared buffers enough? Each block is 8k by default.\n>>> BTW, please try to reset random_page_cost.\n>>\n>>\n>> Look above.\n>>\n>> I will try upgrading the minor version next.\n>> I will also try setting up a 13.X version locally and import the data\n>> from 12.2 to 13.X and see if it might be faster.\n>>\n>>\n>> On Thu, May 6, 2021 at 11:16 PM Imre Samu <[email protected]>\n>> wrote:\n>>\n>>> *> Postgres Version : *PostgreSQL 12.2,\n>>> > ... ON ... USING btree\n>>>\n>>> IMHO:\n>>> The next minor (bugfix & security) release is near ( expected ~ May 13th,\n>>> 2021 ) https://www.postgresql.org/developer/roadmap/\n>>> so you can update your PostgreSQL to 12.7 ( + full Reindexing\n>>> recommended ! 
)\n>>>\n>>> You can find a lot of B-tree index-related fixes.\n>>> https://www.postgresql.org/docs/12/release-12-3.html Release date:\n>>> 2020-05-14\n>>> - Fix possible undercounting of deleted B-tree index pages in VACUUM\n>>> VERBOSE output\n>>> - Fix wrong bookkeeping for oldest deleted page in a B-tree index\n>>> - Ensure INCLUDE'd columns are always removed from B-tree pivot tuples\n>>> https://www.postgresql.org/docs/12/release-12-4.html\n>>> - Avoid repeated marking of dead btree index entries as dead\n>>> https://www.postgresql.org/docs/12/release-12-5.html\n>>> - Fix failure of parallel B-tree index scans when the index condition\n>>> is unsatisfiable\n>>> https://www.postgresql.org/docs/12/release-12-6.html Release date:\n>>> 2021-02-11\n>>>\n>>>\n>>> > COLLATE pg_catalog.\"default\"\n>>>\n>>> You can test the \"C\" collation on some columns (keys?); in theory,\n>>> it should be faster:\n>>> \"The drawback of using locales other than C or POSIX in PostgreSQL is\n>>> its performance impact. It slows character handling and prevents ordinary\n>>> indexes from being used by LIKE. For this reason use locales only if you\n>>> actually need them.\"\n>>> https://www.postgresql.org/docs/12/locale.html\n>>>\n>>> https://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com\n>>>\n>>> Best,\n>>> Imre\n>>>\n>>>\n>>> On Thu, May 6, 2021 at 4:38 PM Semen Yefimenko <[email protected]>\n>>> wrote:\n>>>\n>>>> Hi there,\n>>>>\n>>>> I've recently been involved in migrating our old system to SQL Server\n>>>> and then PostgreSQL. 
Everything has been working fine so far but now after\n>>>> executing our tests on Postgres, we saw a very slow running query on a\n>>>> large table in our database.\n>>>> I have tried asking on other platforms but no one has been able to give\n>>>> me a satisfying answer.\n>>>>\n>>>> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build\n>>>> 1914, 64-bit\n>>>> No notable errors in the Server log and the Postgres Server itself.\n>>>>\n>>>> The table structure :\n>>>>\n>>>> CREATE TABLE logtable\n>>>> (\n>>>> key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n>>>> id integer,\n>>>> column3 integer,\n>>>> column4 integer,\n>>>> column5 integer,\n>>>> column6 integer,\n>>>> column7 integer,\n>>>> column8 integer,\n>>>> column9 character varying(128) COLLATE pg_catalog.\"default\",\n>>>> column10 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column11 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column12 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column13 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column14 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column15 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column16 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column17 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column18 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column19 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>> column21 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column22 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column23 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column24 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column25 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column26 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column27 character varying(256) COLLATE 
pg_catalog.\"default\",\n>>>> column28 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column29 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column30 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column31 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column32 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column33 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column34 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> column35 character varying(256) COLLATE pg_catalog.\"default\",\n>>>> entrytype integer,\n>>>> column37 bigint,\n>>>> column38 bigint,\n>>>> column39 bigint,\n>>>> column40 bigint,\n>>>> column41 bigint,\n>>>> column42 bigint,\n>>>> column43 bigint,\n>>>> column44 bigint,\n>>>> column45 bigint,\n>>>> column46 bigint,\n>>>> column47 character varying(128) COLLATE pg_catalog.\"default\",\n>>>> timestampcol timestamp without time zone,\n>>>> column49 timestamp without time zone,\n>>>> column50 timestamp without time zone,\n>>>> column51 timestamp without time zone,\n>>>> column52 timestamp without time zone,\n>>>> archivestatus integer,\n>>>> column54 integer,\n>>>> column55 character varying(20) COLLATE pg_catalog.\"default\",\n>>>> CONSTRAINT pkey PRIMARY KEY (key)\n>>>> USING INDEX TABLESPACE tablespace\n>>>> )\n>>>>\n>>>> TABLESPACE tablespace;\n>>>>\n>>>> ALTER TABLE schema.logtable\n>>>> OWNER to user;\n>>>>\n>>>> CREATE INDEX idx_timestampcol\n>>>> ON schema.logtable USING btree\n>>>> ( timestampcol ASC NULLS LAST )\n>>>> TABLESPACE tablespace ;\n>>>>\n>>>> CREATE INDEX idx_test2\n>>>> ON schema.logtable USING btree\n>>>> ( entrytype ASC NULLS LAST)\n>>>> TABLESPACE tablespace\n>>>> WHERE archivestatus <= 1;\n>>>>\n>>>> CREATE INDEX idx_arcstatus\n>>>> ON schema.logtable USING btree\n>>>> ( archivestatus ASC NULLS LAST)\n>>>> TABLESPACE tablespace;\n>>>>\n>>>> CREATE INDEX idx_entrytype\n>>>> ON schema.logtable USING btree\n>>>> ( entrytype ASC NULLS 
LAST)\n>>>> TABLESPACE tablespace ;\n>>>>\n>>>>\n>>>> The table contains 14.000.000 entries and has about 3.3 GB of data:\n>>>> No triggers, inserts per day, probably 5-20 K per day.\n>>>>\n>>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>>>> relname='logtable';\n>>>>\n>>>> relname\n>>>> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n>>>>\n>>>> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\n>>>> logtable | 405988| 14091424| 405907|r |\n>>>> 54|false |NULL | 3326803968|\n>>>>\n>>>>\n>>>> The slow running query:\n>>>>\n>>>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype =\n>>>> 4001 or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol\n>>>> desc;\n>>>>\n>>>>\n>>>> This query runs in about 45-60 seconds.\n>>>> The same query runs in about 289 ms Oracle and 423 ms in SQL-Server.\n>>>> Now I understand that actually loading all results would take a while.\n>>>> (about 520K or so rows)\n>>>> But that shouldn't be exactly what happens right? There should be a\n>>>> resultset iterator which can retrieve all data but doesn't from the get go.\n>>>>\n>>>> With the help of some people in the slack and so thread, I've found a\n>>>> configuration parameter which helps performance :\n>>>>\n>>>> set random_page_cost = 1;\n>>>>\n>>>> This improved performance from 45-60 s to 15-35 s. (since we are using\n>>>> ssd's)\n>>>> Still not acceptable but definitely an improvement.\n>>>> Some maybe relevant system parameters:\n>>>>\n>>>> effective_cache_size 4GB\n>>>> maintenance_work_mem 1GB\n>>>> shared_buffers 2GB\n>>>> work_mem 1GB\n>>>>\n>>>>\n>>>> Currently I'm accessing the data through DbBeaver (JDBC -\n>>>> postgresql-42.2.5.jar) and our JAVA application (JDBC -\n>>>> postgresql-42.2.19.jar). 
Both use the defaultRowFetchSize=5000 to not load\n>>>> everything into memory and limit the results.\n>>>> The explain plan:\n>>>>\n>>>> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n>>>> (Above Query)\n>>>>\n>>>>\n>>>> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558)\n>>>> (actual time=21210.019..22319.444 rows=515841 loops=1)\n>>>> Output: column1, .. , column54\n>>>> Workers Planned: 2\n>>>> Workers Launched: 2\n>>>> Buffers: shared hit=141487 read=153489\n>>>> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\n>>>> time=21148.887..21297.428 rows=171947 loops=3)\n>>>> Output: column1, .. , column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 62180kB\n>>>> Worker 0: Sort Method: quicksort Memory: 56969kB\n>>>> Worker 1: Sort Method: quicksort Memory: 56837kB\n>>>> Buffers: shared hit=141487 read=153489\n>>>> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n>>>> Buffers: shared hit=45558 read=49514\n>>>> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n>>>> Buffers: shared hit=45104 read=49506\n>>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>>> (cost=5652.74..327147.77 rows=214503 width=2558) (actual\n>>>> time=1304.813..20637.462 rows=171947 loops=3)\n>>>> Output: column1, .. 
, column54\n>>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>>> Filter: (logtable.archivestatus <= 1)\n>>>> Heap Blocks: exact=103962\n>>>> Buffers: shared hit=141473 read=153489\n>>>> Worker 0: actual time=1280.472..20638.620 rows=166776\n>>>> loops=1\n>>>> Buffers: shared hit=45551 read=49514\n>>>> Worker 1: actual time=1275.274..20626.219 rows=165896\n>>>> loops=1\n>>>> Buffers: shared hit=45097 read=49506\n>>>> -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0)\n>>>> (actual time=1179.438..1179.438 rows=0 loops=1)\n>>>> Buffers: shared hit=9 read=1323\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\n>>>> rows=65970 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4000)\n>>>> Buffers: shared hit=1 read=171\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\n>>>> rows=224945 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4001)\n>>>> Buffers: shared hit=4 read=576\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637\n>>>> rows=224926 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4002)\n>>>> Buffers: shared hit=4 read=576\n>>>> Settings: random_page_cost = '1', search_path = '\"$user\", schema,\n>>>> public', temp_buffers = '80MB', work_mem = '1GB'\n>>>> Planning Time: 0.578 ms\n>>>> Execution Time: 22617.351 ms\n>>>>\n>>>> As mentioned before, oracle does this much faster.\n>>>>\n>>>>\n>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>> | Id | Operation | Name\n>>>> | Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n>>>>\n>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>> | 0 | SELECT STATEMENT |\n>>>> | 6878 | 2491K| | 2143 
(1)| 00:00:01 |\n>>>> | 1 | SORT ORDER BY |\n>>>> | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n>>>> | 2 | INLIST ITERATOR |\n>>>> | | | | | |\n>>>> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable\n>>>> | 6878 | 2491K| | 1597 (1)| 00:00:01 |\n>>>> |* 4 | INDEX RANGE SCAN | idx_entrytype\n>>>> | 6878 | | | 23 (0)| 00:00:01 |\n>>>>\n>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>>\n>>>> Is there much I can analyze, any information you might need to further\n>>>> analyze this?\n>>>>\n>>>\n>\n> --\n> Thanks,\n> Vijay\n> Mumbai, India\n>\n\nAs mentionend in the slack comments :SELECT pg_size_pretty(pg_relation_size('logtable')) as table_size, pg_size_pretty(pg_relation_size('idx_entrytype')) as index_size, (pgstattuple('logtable')).dead_tuple_percent;table_size | index_size | dead_tuple_percent------------+------------+--------------------3177 MB | 289 MB | 0I have roughly 6 indexes which all have around 300 MBSELECT pg_relation_size('logtable') as table_size, pg_relation_size(idx_entrytype) as index_size, 100-(pgstatindex('idx_entrytype')).avg_leaf_density as bloat_ratiotable_size | index_size | bloat_ratio------------+------------+-------------------3331694592 | 302555136 | 5.219999999999999Your queries:n_live_tup n_dead_tup14118380 0For testing, I've also been running VACUUM and ANALYZE pretty much before every test run.Am Fr., 7. 
Mai 2021 um 10:44 Uhr schrieb Vijaykumar Jain <[email protected]>:ok one last thing, not to be a PITA, but just in case if this helps.postgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable';\npostgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable';\n\nbasically, i just to verify if the table is not bloated.looking at n_live_tup vs n_dead_tup would help understand it.if you see too many dead tuples, vacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if there are no tx using the dead tuples) and then run your query.Thanks,VijayOn Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]> wrote:Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure how I'm supposed to do it. (single E-Mails vs many) Can you try tuning by increasing the shared_buffers slowly in steps of 500MB, and running explain analyze against the query.-- 2500 MB shared buffers - random_page_cost = 1;Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=2076.329..3737.050 rows=516517 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=295446 -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=2007.487..2202.707 rows=172172 loops=3) Output: column1, .. , column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 65154kB Worker 0: Sort Method: quicksort Memory: 55707kB Worker 1: Sort Method: quicksort Memory: 55304kB Buffers: shared hit=295446 Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1 Buffers: shared hit=91028 Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1 Buffers: shared hit=92133 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=322.125..1618.971 rows=172172 loops=3) Output: column1, .. 
, column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=110951 Buffers: shared hit=295432 Worker 0: actual time=282.201..1595.117 rows=161205 loops=1 Buffers: shared hit=91021 Worker 1: actual time=303.671..1623.299 rows=161935 loops=1 Buffers: shared hit=92126 -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0) (actual time=199.119..199.119 rows=0 loops=1) Buffers: shared hit=1334 -> Bitmap Index Scan on idx_entrytype (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=172 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872 rows=225283 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=581 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377 rows=225264 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=581Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.940 msExecution Time: 4188.083 ms-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3000 MB shared buffers - random_page_cost = 1;Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=2062.280..3763.408 rows=516517 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=295446 -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=1987.933..2180.422 rows=172172 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 66602kB Worker 0: Sort Method: quicksort Memory: 55149kB Worker 1: Sort Method: quicksort Memory: 54415kB Buffers: shared hit=295446 Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1 Buffers: shared hit=89981 Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1 Buffers: shared hit=90141 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=340.705..1603.796 rows=172172 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=113990 Buffers: shared hit=295432 Worker 0: actual time=317.918..1605.548 rows=159556 loops=1 Buffers: shared hit=89974 Worker 1: actual time=304.744..1589.221 rows=158554 loops=1 Buffers: shared hit=90134 -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0) (actual time=218.972..218.973 rows=0 loops=1) Buffers: shared hit=1334 -> Bitmap Index Scan on idx_entrytype (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=172 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121 rows=225283 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=581 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098 rows=225264 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=581Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 2.717 msExecution Time: 4224.670 ms-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3500 MB shared buffers - random_page_cost = 
1;Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=3578.155..4932.858 rows=516517 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=14 read=295432 written=67 -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=3482.159..3677.227 rows=172172 loops=3) Output: column1, .. , column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 58533kB Worker 0: Sort Method: quicksort Memory: 56878kB Worker 1: Sort Method: quicksort Memory: 60755kB Buffers: shared hit=14 read=295432 written=67 Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1 Buffers: shared hit=7 read=95783 written=25 Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1 Buffers: shared hit=5 read=101608 written=20 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=345.111..3042.932 rows=172172 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=96709 Buffers: shared hit=2 read=295430 written=67 Worker 0: actual time=300.525..2999.403 rows=166842 loops=1 Buffers: shared read=95783 written=25 Worker 1: actual time=300.552..3004.859 rows=179354 loops=1 Buffers: shared read=101606 written=20 -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0) (actual time=241.996..241.997 rows=0 loops=1) Buffers: shared hit=2 read=1332 -> Bitmap Index Scan on idx_entrytype (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared read=172 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052 rows=225283 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=1 read=580 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2261.87 rows=223525 
width=0) (actual time=73.800..73.800 rows=225264 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=1 read=580Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.597 msExecution Time: 5389.811 ms This doesn't seem to have had an effect. Thanks for the suggestion. Have you try of excluding not null from index? Can you give dispersion of archivestatus?Yes I have, it yielded the same performance boost as : create index test on logtable(entrytype) where archivestatus <= 1;I wonder what the old query plan was...Would you include links to your prior correspondance ?So prior Execution Plans are present in the SO.The other forums I've tried are the official slack channel : https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600And SO : https://stackoverflow.com/questions/67401792/slow-running-postgresql-queryBut I think most of the points discussed in these posts have already been mentionend by you except bloating of indexes. Oracle is apparently doing a single scan on \"entrytype\".As a test, you could try forcing that, like:begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;orbegin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;I've tried enable_bitmapscan=off but it didn't yield any good results.-- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to offGather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual time=7716.031..9043.399 rows=516517 loops=1) Output: column1, .., column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=192 read=406605 -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual time=7642.666..7835.527 rows=172172 loops=3) Output: column1, .., column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 58803kB Worker 0: Sort Method: quicksort Memory: 60376kB Worker 1: Sort Method: quicksort Memory: 56988kB Buffers: shared hit=192 read=406605 Worker 0: actual time=7610.482..7814.905 
rows=177637 loops=1 Buffers: shared hit=78 read=137826 Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1 Buffers: shared hit=80 read=132672 -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70 rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3) Output: column1, .., column54 Filter: ((logtable.acrhivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))) Rows Removed by Filter: 4533459 Buffers: shared hit=96 read=406605 Worker 0: actual time=1.537..7158.286 rows=177637 loops=1 Buffers: shared hit=30 read=137826 Worker 1: actual time=1.414..7161.670 rows=167316 loops=1 Buffers: shared hit=32 read=132672Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.725 msExecution Time: 9500.928 ms---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2000 MB shared buffers - random_page_cost = 4 - -- 2000 -- 2000 MB shared buffers - random_page_cost = 1 - enable_bitmapscan to offGather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual time=7519.032..8871.433 rows=516517 loops=1) Output: column1, .., column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=576 read=406221 -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual time=7451.958..7649.480 rows=172172 loops=3) Output: column1, .., column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 58867kB Worker 0: Sort Method: quicksort Memory: 58510kB Worker 1: Sort Method: quicksort Memory: 58788kB Buffers: shared hit=576 read=406221 Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1 Buffers: shared hit=203 read=135166 Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1 Buffers: shared hit=202 read=135225 -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70 
rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3) Output: column1, .., column54 Filter: ((logtable.acrhivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))) Rows Removed by Filter: 4533459 Buffers: shared hit=480 read=406221 Worker 0: actual time=2.628..7006.420 rows=172085 loops=1 Buffers: shared hit=155 read=135166 Worker 1: actual time=3.948..6978.154 rows=172948 loops=1 Buffers: shared hit=154 read=135225Settings: enable_bitmapscan = 'off', random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.621 msExecution Time: 9339.457 msHave you tune shared buffers enough? Each block is of 8k by default.BTW, please try to reset random_page_cost.Look above. I will try upgrading the minor version next.I will also try setting up a 13.X version locally and import the data from 12.2 to 13.X and see if it might be faster.Am Do., 6. Mai 2021 um 23:16 Uhr schrieb Imre Samu <[email protected]>:> Postgres Version : PostgreSQL 12.2,> ... ON ... USING btreeIMHO:The next minor (bugix&security) release is near ( expected ~ May 13th, 2021 ) https://www.postgresql.org/developer/roadmap/so you can update your PostgreSQL to 12.7 ( + full Reindexing recommended ! 
) You can find a lot of B-tree index-related fixes.https://www.postgresql.org/docs/12/release-12-3.html Release date: 2020-05-14 - Fix possible undercounting of deleted B-tree index pages in VACUUM VERBOSE output - Fix wrong bookkeeping for oldest deleted page in a B-tree index- Ensure INCLUDE'd columns are always removed from B-tree pivot tupleshttps://www.postgresql.org/docs/12/release-12-4.html - Avoid repeated marking of dead btree index entries as dead https://www.postgresql.org/docs/12/release-12-5.html - Fix failure of parallel B-tree index scans when the index condition is unsatisfiablehttps://www.postgresql.org/docs/12/release-12-6.html Release date: 2021-02-11> COLLATE pg_catalog.\"default\"You can test the \"C\" Collation in some columns (keys ? ) ; in theory, it should be faster :\"The drawback of using locales other than C or POSIX in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes from being used by LIKE. For this reason use locales only if you actually need them.\"https://www.postgresql.org/docs/12/locale.htmlhttps://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.comBest, ImreSemen Yefimenko <[email protected]> ezt írta (időpont: 2021. máj. 6., Cs, 16:38):Hi there,I've recently been involved in migrating our old system to SQL Server and then PostgreSQL. Everything has been working fine so far but now after executing our tests on Postgres, we saw a very slow running query on a large table in our database. I have tried asking on other platforms but no one has been able to give me a satisfying answer. 
Postgres Version : PostgreSQL 12.2, compiled by Visual C++ build 1914, 64-bitNo notable errors in the Server log and the Postgres Server itself.The table structure :CREATE TABLE logtable( key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL, id integer, \n\ncolumn3 integer, \n\ncolumn4 integer, \n\ncolumn5 integer, \n\ncolumn6 integer, \n\ncolumn7 integer, \n\ncolumn8 integer, \n\ncolumn9 character varying(128) COLLATE pg_catalog.\"default\", \n\ncolumn10 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn11 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn12 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn13 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn14 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn15 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn16 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn17 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn18 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn19 character varying(2048) COLLATE pg_catalog.\"default\", \n\ncolumn21 \n\ncharacter varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn22 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn23 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn24 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn25 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn26 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn27 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn28 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn29 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn30 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn31 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn32 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn33 character 
varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn34 character varying(256) COLLATE pg_catalog.\"default\", \n\ncolumn35 character varying(256) COLLATE pg_catalog.\"default\", \n\nentrytype integer, \n\ncolumn37 bigint, \n\ncolumn38 bigint, \n\ncolumn39 bigint, \n\ncolumn40 bigint, \n\ncolumn41 bigint, \n\ncolumn42 bigint, \n\ncolumn43 bigint, \n\ncolumn44 bigint, \n\ncolumn45 bigint, \n\ncolumn46 bigint, \n\ncolumn47 character varying(128) COLLATE pg_catalog.\"default\", \n\ntimestampcol timestamp without time zone, \n\ncolumn49 timestamp without time zone, \n\ncolumn50 timestamp without time zone, \n\ncolumn51 timestamp without time zone, \n\ncolumn52 timestamp without time zone, \n\n\n\narchivestatus \n\ninteger, \n\ncolumn54 integer, \n\ncolumn55 character varying(20) COLLATE pg_catalog.\"default\", CONSTRAINT pkey PRIMARY KEY (key) USING INDEX TABLESPACE tablespace)TABLESPACE tablespace;ALTER TABLE schema.logtable OWNER to user;CREATE INDEX idx_timestampcol ON schema.logtable USING btree (\n\n\n\ntimestampcol \n\n\n\n ASC NULLS LAST ) TABLESPACE \n\ntablespace\n\n;CREATE INDEX idx_test2 ON schema.logtable USING btree ( entrytype ASC NULLS LAST) TABLESPACE tablespace WHERE archivestatus <= 1;CREATE INDEX idx_arcstatus ON \n\nschema.logtable USING btree ( archivestatus ASC NULLS LAST) TABLESPACE tablespace;CREATE INDEX \n\nidx_entrytype ON schema.logtable USING btree ( entrytype ASC NULLS LAST) TABLESPACE \n\ntablespace\n\n;The table contains 14.000.000 entries and has about 3.3 GB of data:No triggers, inserts per day, probably 5-20 K per day.SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='logtable';relname |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|logtable | 405988| 14091424| 405907|r | 54|false |NULL | 
3326803968|The slow running query:SELECT column1,..., column54 where ((entrytype = 4000 or \n\nentrytype \n\n= 4001 or \n\nentrytype \n\n= 4002) and (archivestatus <= 1)) order by timestampcol desc;This query runs in about 45-60 seconds.The same query runs in about 289 ms Oracle and 423 ms in SQL-Server. Now I understand that actually loading all results would take a while. (about 520K or so rows) But that shouldn't be exactly what happens right? There should be a resultset iterator which can retrieve all data but doesn't from the get go. With the help of some people in the slack and so thread, I've found a configuration parameter which helps performance : set random_page_cost = 1;This improved performance from 45-60 s to 15-35 s. (since we are using ssd's) Still not acceptable but definitely an improvement. Some maybe relevant system parameters:effective_cache_size\t4GBmaintenance_work_mem\t1GBshared_buffers\t2GBwork_mem\t1GBCurrently I'm accessing the data through DbBeaver (JDBC - postgresql-42.2.5.jar) and our JAVA application (JDBC - postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load everything into memory and limit the results. The explain plan:EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...(Above Query)Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual time=21210.019..22319.444 rows=515841 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=141487 read=153489 -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual time=21148.887..21297.428 rows=171947 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 62180kB Worker 0: Sort Method: quicksort Memory: 56969kB Worker 1: Sort Method: quicksort Memory: 56837kB Buffers: shared hit=141487 read=153489 Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1 Buffers: shared hit=45558 read=49514 Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1 Buffers: shared hit=45104 read=49506 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5652.74..327147.77 rows=214503 width=2558) (actual time=1304.813..20637.462 rows=171947 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=103962 Buffers: shared hit=141473 read=153489 Worker 0: actual time=1280.472..20638.620 rows=166776 loops=1 Buffers: shared hit=45551 read=49514 Worker 1: actual time=1275.274..20626.219 rows=165896 loops=1 Buffers: shared hit=45097 read=49506 -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0) (actual time=1179.438..1179.438 rows=0 loops=1) Buffers: shared hit=9 read=1323 -> Bitmap Index Scan on idx_entrytype (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=1 read=171 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849 rows=224945 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=4 read=576 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637 rows=224926 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=4 read=576Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.578 msExecution Time: 22617.351 msAs mentioned before, oracle does this much faster. 
-------------------------------------------------------------------------------------------------------------------------| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |-------------------------------------------------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 6878 | 2491K| | 2143 (1)| 00:00:01 || 1 | SORT ORDER BY | | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 || 2 | INLIST ITERATOR | | | | | | ||* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable | 6878 | 2491K| | 1597 (1)| 00:00:01 ||* 4 | INDEX RANGE SCAN | idx_entrytype | 6878 | | | 23 (0)| 00:00:01 |-------------------------------------------------------------------------------------------------------------------------Is there much I can analyze, any information you might need to further analyze this? \n\n\n-- Thanks,VijayMumbai, India",
"msg_date": "Fri, 7 May 2021 11:02:48 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "Is this on windows ?\n\nI see a thread that mentions of performance penalty due to parallel worker\n\n\nThere is a mailing thread with subject line -\nHuge performance penalty with parallel queries in Windows x64 v. Linux x64\n\n\n\nOn Fri, 7 May 2021 at 2:33 PM Semen Yefimenko <[email protected]>\nwrote:\n\n> As mentionend in the slack comments :\n>\n> SELECT pg_size_pretty(pg_relation_size('logtable')) as table_size,\n> pg_size_pretty(pg_relation_size('idx_entrytype')) as index_size,\n> (pgstattuple('logtable')).dead_tuple_percent;\n>\n> table_size | index_size | dead_tuple_percent\n> ------------+------------+--------------------\n> 3177 MB | 289 MB | 0\n>\n> I have roughly 6 indexes which all have around 300 MB\n>\n> SELECT pg_relation_size('logtable') as table_size,\n> pg_relation_size(idx_entrytype) as index_size,\n> 100-(pgstatindex('idx_entrytype')).avg_leaf_density as bloat_ratio\n>\n> table_size | index_size | bloat_ratio\n> ------------+------------+-------------------\n> 3331694592 | 302555136 | 5.219999999999999\n>\n> Your queries:\n>\n> n_live_tup n_dead_tup\n> 14118380 0\n>\n>\n> For testing, I've also been running VACUUM and ANALYZE pretty much before\n> every test run.\n>\n> Am Fr., 7. 
Mai 2021 um 10:44 Uhr schrieb Vijaykumar Jain <\n> [email protected]>:\n>\n>> ok one last thing, not to be a PITA, but just in case if this helps.\n>>\n>> postgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable';\n>> postgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable';\n>> basically, i just to verify if the table is not bloated.\n>> looking at *n_live_tup* vs *n_dead_tup* would help understand it.\n>>\n>> if you see too many dead tuples,\n>> vacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if\n>> there are no tx using the dead tuples)\n>>\n>> and then run your query.\n>>\n>> Thanks,\n>> Vijay\n>>\n>>\n>>\n>>\n>>\n>>\n>> On Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]>\n>> wrote:\n>>\n>>> Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure\n>>> how I'm supposed to do it. (single E-Mails vs many)\n>>>\n>>>\n>>>> Can you try tuning by increasing the shared_buffers slowly in steps of\n>>>> 500MB, and running explain analyze against the query.\n>>>\n>>>\n>>> -- 2500 MB shared buffers - random_page_cost = 1;\n>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>>> time=2076.329..3737.050 rows=516517 loops=1)\n>>> Output: column1, .. , column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=295446\n>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>> time=2007.487..2202.707 rows=172172 loops=3)\n>>> Output: column1, .. 
, column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 65154kB\n>>> Worker 0: Sort Method: quicksort Memory: 55707kB\n>>> Worker 1: Sort Method: quicksort Memory: 55304kB\n>>> Buffers: shared hit=295446\n>>> Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1\n>>> Buffers: shared hit=91028\n>>> Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1\n>>> Buffers: shared hit=92133\n>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>> time=322.125..1618.971 rows=172172 loops=3)\n>>> Output: column1, .. , column54\n>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>> Filter: (logtable.archivestatus <= 1)\n>>> Heap Blocks: exact=110951\n>>> Buffers: shared hit=295432\n>>> Worker 0: actual time=282.201..1595.117 rows=161205 loops=1\n>>> Buffers: shared hit=91021\n>>> Worker 1: actual time=303.671..1623.299 rows=161935 loops=1\n>>> Buffers: shared hit=92126\n>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>> (actual time=199.119..199.119 rows=0 loops=1)\n>>> Buffers: shared hit=1334\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857\n>>> rows=65970 loops=1)\n>>> Index Cond: (logtable.entrytype = 4000)\n>>> Buffers: shared hit=172\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872\n>>> rows=225283 loops=1)\n>>> Index Cond: (logtable.entrytype = 4001)\n>>> Buffers: shared hit=581\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377\n>>> rows=225264 loops=1)\n>>> Index Cond: (logtable.entrytype = 4002)\n>>> Buffers: shared hit=581\n>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>>> Planning Time: 0.940 ms\n>>> Execution Time: 4188.083 ms\n>>>\n>>> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> -- 3000 MB shared buffers - random_page_cost = 1;\n>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>>> time=2062.280..3763.408 rows=516517 loops=1)\n>>> Output: column1, .. , column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=295446\n>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>> time=1987.933..2180.422 rows=172172 loops=3)\n>>> Output: column1, .. , column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 66602kB\n>>> Worker 0: Sort Method: quicksort Memory: 55149kB\n>>> Worker 1: Sort Method: quicksort Memory: 54415kB\n>>> Buffers: shared hit=295446\n>>> Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1\n>>> Buffers: shared hit=89981\n>>> Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1\n>>> Buffers: shared hit=90141\n>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>> time=340.705..1603.796 rows=172172 loops=3)\n>>> Output: column1, .. 
, column54\n>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>> Filter: (logtable.archivestatus <= 1)\n>>> Heap Blocks: exact=113990\n>>> Buffers: shared hit=295432\n>>> Worker 0: actual time=317.918..1605.548 rows=159556 loops=1\n>>> Buffers: shared hit=89974\n>>> Worker 1: actual time=304.744..1589.221 rows=158554 loops=1\n>>> Buffers: shared hit=90134\n>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>> (actual time=218.972..218.973 rows=0 loops=1)\n>>> Buffers: shared hit=1334\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742\n>>> rows=65970 loops=1)\n>>> Index Cond: (logtable.entrytype = 4000)\n>>> Buffers: shared hit=172\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121\n>>> rows=225283 loops=1)\n>>> Index Cond: (logtable.entrytype = 4001)\n>>> Buffers: shared hit=581\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098\n>>> rows=225264 loops=1)\n>>> Index Cond: (logtable.entrytype = 4002)\n>>> Buffers: shared hit=581\n>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>>> Planning Time: 2.717 ms\n>>> Execution Time: 4224.670 ms\n>>>\n>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> -- 3500 MB shared buffers - random_page_cost = 1;\n>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542) (actual\n>>> time=3578.155..4932.858 rows=516517 loops=1)\n>>> Output: column1, .. 
, column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=14 read=295432 written=67\n>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>> time=3482.159..3677.227 rows=172172 loops=3)\n>>> Output: column1, .. , column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 58533kB\n>>> Worker 0: Sort Method: quicksort Memory: 56878kB\n>>> Worker 1: Sort Method: quicksort Memory: 60755kB\n>>> Buffers: shared hit=14 read=295432 written=67\n>>> Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1\n>>> Buffers: shared hit=7 read=95783 written=25\n>>> Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1\n>>> Buffers: shared hit=5 read=101608 written=20\n>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>> time=345.111..3042.932 rows=172172 loops=3)\n>>> Output: column1, .. , column54\n>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>> Filter: (logtable.archivestatus <= 1)\n>>> Heap Blocks: exact=96709\n>>> Buffers: shared hit=2 read=295430 written=67\n>>> Worker 0: actual time=300.525..2999.403 rows=166842 loops=1\n>>> Buffers: shared read=95783 written=25\n>>> Worker 1: actual time=300.552..3004.859 rows=179354 loops=1\n>>> Buffers: shared read=101606 written=20\n>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>> (actual time=241.996..241.997 rows=0 loops=1)\n>>> Buffers: shared hit=2 read=1332\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130\n>>> rows=65970 loops=1)\n>>> Index Cond: (logtable.entrytype = 4000)\n>>> Buffers: shared read=172\n>>> -> Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052\n>>> rows=225283 loops=1)\n>>> Index Cond: (logtable.entrytype = 4001)\n>>> Buffers: shared hit=1 read=580\n>>> -> 
Bitmap Index Scan on idx_entrytype\n>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800\n>>> rows=225264 loops=1)\n>>> Index Cond: (logtable.entrytype = 4002)\n>>> Buffers: shared hit=1 read=580\n>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'\n>>> Planning Time: 0.597 ms\n>>> Execution Time: 5389.811 ms\n>>>\n>>>\n>>> This doesn't seem to have had an effect.\n>>> Thanks for the suggestion.\n>>>\n>>> Have you try of excluding not null from index? Can you give dispersion\n>>>> of archivestatus?\n>>>>\n>>>\n>>> Yes I have, it yielded the same performance boost as :\n>>>\n>>> create index test on logtable(entrytype) where archivestatus <= 1;\n>>>\n>>> I wonder what the old query plan was...\n>>>> Would you include links to your prior correspondance ?\n>>>\n>>>\n>>> So prior Execution Plans are present in the SO.\n>>> The other forums I've tried are the official slack channel :\n>>> https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600\n>>> And SO :\n>>> https://stackoverflow.com/questions/67401792/slow-running-postgresql-query\n>>> But I think most of the points discussed in these posts have already\n>>> been mentionend by you except bloating of indexes.\n>>>\n>>> Oracle is apparently doing a single scan on \"entrytype\".\n>>>> As a test, you could try forcing that, like:\n>>>> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\n>>>> or\n>>>> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n>>>\n>>>\n>>> I've tried enable_bitmapscan=off but it didn't yield any good results.\n>>>\n>>> -- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to\n>>> off\n>>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n>>> time=7716.031..9043.399 rows=516517 loops=1)\n>>> Output: column1, .., column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=192 read=406605\n>>> -> Sort (cost=542949.70..543475.75 rows=210418 
width=2542) (actual\n>>> time=7642.666..7835.527 rows=172172 loops=3)\n>>> Output: column1, .., column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 58803kB\n>>> Worker 0: Sort Method: quicksort Memory: 60376kB\n>>> Worker 1: Sort Method: quicksort Memory: 56988kB\n>>> Buffers: shared hit=192 read=406605\n>>> Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1\n>>> Buffers: shared hit=78 read=137826\n>>> Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1\n>>> Buffers: shared hit=80 read=132672\n>>> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n>>> rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3)\n>>> Output: column1, .., column54\n>>> Filter: ((logtable.acrhivestatus <= 1) AND\n>>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>>> (logtable.entrytype = 4002)))\n>>> Rows Removed by Filter: 4533459\n>>> Buffers: shared hit=96 read=406605\n>>> Worker 0: actual time=1.537..7158.286 rows=177637 loops=1\n>>> Buffers: shared hit=30 read=137826\n>>> Worker 1: actual time=1.414..7161.670 rows=167316 loops=1\n>>> Buffers: shared hit=32 read=132672\n>>> Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem =\n>>> '1GB'\n>>> Planning Time: 0.725 ms\n>>> Execution Time: 9500.928 ms\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> 2000 MB shared buffers - random_page_cost = 4 - -- 2000 -- 2000 MB shared\n>>> buffers - random_page_cost = 1 - enable_bitmapscan to off\n>>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542) (actual\n>>> time=7519.032..8871.433 rows=516517 loops=1)\n>>> Output: column1, .., column54\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>>> Buffers: shared hit=576 read=406221\n>>> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n>>> 
time=7451.958..7649.480 rows=172172 loops=3)\n>>> Output: column1, .., column54\n>>> Sort Key: logtable.timestampcol DESC\n>>> Sort Method: quicksort Memory: 58867kB\n>>> Worker 0: Sort Method: quicksort Memory: 58510kB\n>>> Worker 1: Sort Method: quicksort Memory: 58788kB\n>>> Buffers: shared hit=576 read=406221\n>>> Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1\n>>> Buffers: shared hit=203 read=135166\n>>> Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1\n>>> Buffers: shared hit=202 read=135225\n>>> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n>>> rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)\n>>> Output: column1, .., column54\n>>> Filter: ((logtable.acrhivestatus <= 1) AND\n>>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>>> (logtable.entrytype = 4002)))\n>>> Rows Removed by Filter: 4533459\n>>> Buffers: shared hit=480 read=406221\n>>> Worker 0: actual time=2.628..7006.420 rows=172085 loops=1\n>>> Buffers: shared hit=155 read=135166\n>>> Worker 1: actual time=3.948..6978.154 rows=172948 loops=1\n>>> Buffers: shared hit=154 read=135225\n>>> Settings: enable_bitmapscan = 'off', random_page_cost = '1',\n>>> temp_buffers = '80MB', work_mem = '1GB'\n>>> Planning Time: 0.621 ms\n>>> Execution Time: 9339.457 ms\n>>>\n>>> Have you tune shared buffers enough? Each block is of 8k by default.\n>>>> BTW, please try to reset random_page_cost.\n>>>\n>>>\n>>> Look above.\n>>>\n>>> I will try upgrading the minor version next.\n>>> I will also try setting up a 13.X version locally and import the data\n>>> from 12.2 to 13.X and see if it might be faster.\n>>>\n>>>\n>>> Am Do., 6. Mai 2021 um 23:16 Uhr schrieb Imre Samu <[email protected]\n>>> >:\n>>>\n>>>> *> Postgres Version : *PostgreSQL 12.2,\n>>>> > ... ON ... 
USING btree\n>>>>\n>>>> IMHO:\n>>>> The next minor (bugfix & security) release is near ( expected ~ May 13th,\n>>>> 2021 ) https://www.postgresql.org/developer/roadmap/\n>>>> so you can update your PostgreSQL to 12.7 ( + full Reindexing\n>>>> recommended ! )\n>>>>\n>>>> You can find a lot of B-tree index-related fixes.\n>>>> https://www.postgresql.org/docs/12/release-12-3.html Release date:\n>>>> 2020-05-14\n>>>> - Fix possible undercounting of deleted B-tree index pages in VACUUM\n>>>> VERBOSE output\n>>>> - Fix wrong bookkeeping for oldest deleted page in a B-tree index\n>>>> - Ensure INCLUDE'd columns are always removed from B-tree pivot tuples\n>>>> https://www.postgresql.org/docs/12/release-12-4.html\n>>>> - Avoid repeated marking of dead btree index entries as dead\n>>>> https://www.postgresql.org/docs/12/release-12-5.html\n>>>> - Fix failure of parallel B-tree index scans when the index condition\n>>>> is unsatisfiable\n>>>> https://www.postgresql.org/docs/12/release-12-6.html Release date:\n>>>> 2021-02-11\n>>>>\n>>>>\n>>>> > COLLATE pg_catalog.\"default\"\n>>>>\n>>>> You can test the \"C\" Collation in some columns (keys ? ) ; in\n>>>> theory, it should be faster :\n>>>> \"The drawback of using locales other than C or POSIX in PostgreSQL is\n>>>> its performance impact. It slows character handling and prevents ordinary\n>>>> indexes from being used by LIKE. For this reason use locales only if you\n>>>> actually need them.\"\n>>>> https://www.postgresql.org/docs/12/locale.html\n>>>>\n>>>> https://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com\n>>>>\n>>>> Best,\n>>>> Imre\n>>>>\n>>>>\n>>>> Semen Yefimenko <[email protected]> wrote (on 2021-05-06, Thu 16:38):\n>>>>\n>>>>> Hi there,\n>>>>>\n>>>>> I've recently been involved in migrating our old system to SQL Server\n>>>>> and then PostgreSQL. 
Everything has been working fine so far but now after\n>>>>> executing our tests on Postgres, we saw a very slow running query on a\n>>>>> large table in our database.\n>>>>> I have tried asking on other platforms but no one has been able to\n>>>>> give me a satisfying answer.\n>>>>>\n>>>>> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build\n>>>>> 1914, 64-bit\n>>>>> No notable errors in the Server log and the Postgres Server itself.\n>>>>>\n>>>>> The table structure :\n>>>>>\n>>>>> CREATE TABLE logtable\n>>>>> (\n>>>>> key character varying(20) COLLATE pg_catalog.\"default\" NOT NULL,\n>>>>> id integer,\n>>>>> column3 integer,\n>>>>> column4 integer,\n>>>>> column5 integer,\n>>>>> column6 integer,\n>>>>> column7 integer,\n>>>>> column8 integer,\n>>>>> column9 character varying(128) COLLATE pg_catalog.\"default\",\n>>>>> column10 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column11 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column12 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column13 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column14 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column15 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column16 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column17 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column18 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column19 character varying(2048) COLLATE pg_catalog.\"default\",\n>>>>> column21 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column22 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column23 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column24 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column25 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column26 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> 
column27 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column28 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column29 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column30 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column31 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column32 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column33 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column34 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> column35 character varying(256) COLLATE pg_catalog.\"default\",\n>>>>> entrytype integer,\n>>>>> column37 bigint,\n>>>>> column38 bigint,\n>>>>> column39 bigint,\n>>>>> column40 bigint,\n>>>>> column41 bigint,\n>>>>> column42 bigint,\n>>>>> column43 bigint,\n>>>>> column44 bigint,\n>>>>> column45 bigint,\n>>>>> column46 bigint,\n>>>>> column47 character varying(128) COLLATE pg_catalog.\"default\",\n>>>>> timestampcol timestamp without time zone,\n>>>>> column49 timestamp without time zone,\n>>>>> column50 timestamp without time zone,\n>>>>> column51 timestamp without time zone,\n>>>>> column52 timestamp without time zone,\n>>>>> archivestatus integer,\n>>>>> column54 integer,\n>>>>> column55 character varying(20) COLLATE pg_catalog.\"default\",\n>>>>> CONSTRAINT pkey PRIMARY KEY (key)\n>>>>> USING INDEX TABLESPACE tablespace\n>>>>> )\n>>>>>\n>>>>> TABLESPACE tablespace;\n>>>>>\n>>>>> ALTER TABLE schema.logtable\n>>>>> OWNER to user;\n>>>>>\n>>>>> CREATE INDEX idx_timestampcol\n>>>>> ON schema.logtable USING btree\n>>>>> ( timestampcol ASC NULLS LAST )\n>>>>> TABLESPACE tablespace ;\n>>>>>\n>>>>> CREATE INDEX idx_test2\n>>>>> ON schema.logtable USING btree\n>>>>> ( entrytype ASC NULLS LAST)\n>>>>> TABLESPACE tablespace\n>>>>> WHERE archivestatus <= 1;\n>>>>>\n>>>>> CREATE INDEX idx_arcstatus\n>>>>> ON schema.logtable USING btree\n>>>>> ( archivestatus ASC NULLS LAST)\n>>>>> TABLESPACE 
tablespace;\n>>>>>\n>>>>> CREATE INDEX idx_entrytype\n>>>>> ON schema.logtable USING btree\n>>>>> ( entrytype ASC NULLS LAST)\n>>>>> TABLESPACE tablespace ;\n>>>>>\n>>>>>\n>>>>> The table contains 14.000.000 entries and has about 3.3 GB of data:\n>>>>> No triggers, inserts per day, probably 5-20 K per day.\n>>>>>\n>>>>> SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\n>>>>> relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\n>>>>> relname='logtable';\n>>>>>\n>>>>> relname\n>>>>> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|\n>>>>>\n>>>>> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|\n>>>>> logtable | 405988| 14091424| 405907|r |\n>>>>> 54|false |NULL | 3326803968|\n>>>>>\n>>>>>\n>>>>> The slow running query:\n>>>>>\n>>>>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype =\n>>>>> 4001 or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol\n>>>>> desc;\n>>>>>\n>>>>>\n>>>>> This query runs in about 45-60 seconds.\n>>>>> The same query runs in about 289 ms Oracle and 423 ms in SQL-Server.\n>>>>> Now I understand that actually loading all results would take a while.\n>>>>> (about 520K or so rows)\n>>>>> But that shouldn't be exactly what happens right? There should be a\n>>>>> resultset iterator which can retrieve all data but doesn't from the get go.\n>>>>>\n>>>>> With the help of some people in the slack and so thread, I've found a\n>>>>> configuration parameter which helps performance :\n>>>>>\n>>>>> set random_page_cost = 1;\n>>>>>\n>>>>> This improved performance from 45-60 s to 15-35 s. 
(since we are using\n>>>>> ssd's)\n>>>>> Still not acceptable but definitely an improvement.\n>>>>> Some maybe relevant system parameters:\n>>>>>\n>>>>> effective_cache_size 4GB\n>>>>> maintenance_work_mem 1GB\n>>>>> shared_buffers 2GB\n>>>>> work_mem 1GB\n>>>>>\n>>>>>\n>>>>> Currently I'm accessing the data through DbBeaver (JDBC -\n>>>>> postgresql-42.2.5.jar) and our JAVA application (JDBC -\n>>>>> postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load\n>>>>> everything into memory and limit the results.\n>>>>> The explain plan:\n>>>>>\n>>>>> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...\n>>>>> (Above Query)\n>>>>>\n>>>>>\n>>>>> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558)\n>>>>> (actual time=21210.019..22319.444 rows=515841 loops=1)\n>>>>> Output: column1, .. , column54\n>>>>> Workers Planned: 2\n>>>>> Workers Launched: 2\n>>>>> Buffers: shared hit=141487 read=153489\n>>>>> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual\n>>>>> time=21148.887..21297.428 rows=171947 loops=3)\n>>>>> Output: column1, .. , column54\n>>>>> Sort Key: logtable.timestampcol DESC\n>>>>> Sort Method: quicksort Memory: 62180kB\n>>>>> Worker 0: Sort Method: quicksort Memory: 56969kB\n>>>>> Worker 1: Sort Method: quicksort Memory: 56837kB\n>>>>> Buffers: shared hit=141487 read=153489\n>>>>> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1\n>>>>> Buffers: shared hit=45558 read=49514\n>>>>> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1\n>>>>> Buffers: shared hit=45104 read=49506\n>>>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>>>> (cost=5652.74..327147.77 rows=214503 width=2558) (actual\n>>>>> time=1304.813..20637.462 rows=171947 loops=3)\n>>>>> Output: column1, .. 
, column54\n>>>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>>>> Filter: (logtable.archivestatus <= 1)\n>>>>> Heap Blocks: exact=103962\n>>>>> Buffers: shared hit=141473 read=153489\n>>>>> Worker 0: actual time=1280.472..20638.620 rows=166776\n>>>>> loops=1\n>>>>> Buffers: shared hit=45551 read=49514\n>>>>> Worker 1: actual time=1275.274..20626.219 rows=165896\n>>>>> loops=1\n>>>>> Buffers: shared hit=45097 read=49506\n>>>>> -> BitmapOr (cost=5652.74..5652.74 rows=520443\n>>>>> width=0) (actual time=1179.438..1179.438 rows=0 loops=1)\n>>>>> Buffers: shared hit=9 read=1323\n>>>>> -> Bitmap Index Scan on idx_entrytype\n>>>>> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940\n>>>>> rows=65970 loops=1)\n>>>>> Index Cond: (logtable.entrytype = 4000)\n>>>>> Buffers: shared hit=1 read=171\n>>>>> -> Bitmap Index Scan on idx_entrytype\n>>>>> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849\n>>>>> rows=224945 loops=1)\n>>>>> Index Cond: (logtable.entrytype = 4001)\n>>>>> Buffers: shared hit=4 read=576\n>>>>> -> Bitmap Index Scan on idx_entrytype\n>>>>> (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637\n>>>>> rows=224926 loops=1)\n>>>>> Index Cond: (logtable.entrytype = 4002)\n>>>>> Buffers: shared hit=4 read=576\n>>>>> Settings: random_page_cost = '1', search_path = '\"$user\", schema,\n>>>>> public', temp_buffers = '80MB', work_mem = '1GB'\n>>>>> Planning Time: 0.578 ms\n>>>>> Execution Time: 22617.351 ms\n>>>>>\n>>>>> As mentioned before, oracle does this much faster.\n>>>>>\n>>>>>\n>>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>>> | Id | Operation | Name\n>>>>> | Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n>>>>>\n>>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>>> | 0 | 
SELECT STATEMENT |\n>>>>> | 6878 | 2491K| | 2143 (1)| 00:00:01 |\n>>>>> | 1 | SORT ORDER BY |\n>>>>> | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |\n>>>>> | 2 | INLIST ITERATOR |\n>>>>> | | | | | |\n>>>>> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable\n>>>>> | 6878 | 2491K| | 1597 (1)| 00:00:01 |\n>>>>> |* 4 | INDEX RANGE SCAN | idx_entrytype\n>>>>> | 6878 | | | 23 (0)| 00:00:01 |\n>>>>>\n>>>>> -------------------------------------------------------------------------------------------------------------------------\n>>>>>\n>>>>> Is there much I can analyze, any information you might need to further\n>>>>> analyze this?\n>>>>>\n>>>>\n>>\n>> --\n>> Thanks,\n>> Vijay\n>> Mumbai, India\n>>\n> --\nThanks,\nVijay\nMumbai, India\n\nIs this on windows ?I see a thread that mentions of performance penalty due to parallel worker There is a mailing thread with subject line - Huge performance penalty with parallel queries in Windows x64 v. Linux x64
3326803968|The slow running query:SELECT column1,..., column54 where ((entrytype = 4000 or \n\nentrytype \n\n= 4001 or \n\nentrytype \n\n= 4002) and (archivestatus <= 1)) order by timestampcol desc;This query runs in about 45-60 seconds.The same query runs in about 289 ms Oracle and 423 ms in SQL-Server. Now I understand that actually loading all results would take a while. (about 520K or so rows) But that shouldn't be exactly what happens right? There should be a resultset iterator which can retrieve all data but doesn't from the get go. With the help of some people in the slack and so thread, I've found a configuration parameter which helps performance : set random_page_cost = 1;This improved performance from 45-60 s to 15-35 s. (since we are using ssd's) Still not acceptable but definitely an improvement. Some maybe relevant system parameters:effective_cache_size\t4GBmaintenance_work_mem\t1GBshared_buffers\t2GBwork_mem\t1GBCurrently I'm accessing the data through DbBeaver (JDBC - postgresql-42.2.5.jar) and our JAVA application (JDBC - postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load everything into memory and limit the results. The explain plan:EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...(Above Query)Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558) (actual time=21210.019..22319.444 rows=515841 loops=1) Output: column1, .. , column54 Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=141487 read=153489 -> Sort (cost=346142.69..346678.95 rows=214503 width=2558) (actual time=21148.887..21297.428 rows=171947 loops=3) Output: column1, .. 
, column54 Sort Key: logtable.timestampcol DESC Sort Method: quicksort Memory: 62180kB Worker 0: Sort Method: quicksort Memory: 56969kB Worker 1: Sort Method: quicksort Memory: 56837kB Buffers: shared hit=141487 read=153489 Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1 Buffers: shared hit=45558 read=49514 Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1 Buffers: shared hit=45104 read=49506 -> Parallel Bitmap Heap Scan on schema.logtable (cost=5652.74..327147.77 rows=214503 width=2558) (actual time=1304.813..20637.462 rows=171947 loops=3) Output: column1, .. , column54 Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)) Filter: (logtable.archivestatus <= 1) Heap Blocks: exact=103962 Buffers: shared hit=141473 read=153489 Worker 0: actual time=1280.472..20638.620 rows=166776 loops=1 Buffers: shared hit=45551 read=49514 Worker 1: actual time=1275.274..20626.219 rows=165896 loops=1 Buffers: shared hit=45097 read=49506 -> BitmapOr (cost=5652.74..5652.74 rows=520443 width=0) (actual time=1179.438..1179.438 rows=0 loops=1) Buffers: shared hit=9 read=1323 -> Bitmap Index Scan on idx_entrytype (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940 rows=65970 loops=1) Index Cond: (logtable.entrytype = 4000) Buffers: shared hit=1 read=171 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849 rows=224945 loops=1) Index Cond: (logtable.entrytype = 4001) Buffers: shared hit=4 read=576 -> Bitmap Index Scan on idx_entrytype (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637 rows=224926 loops=1) Index Cond: (logtable.entrytype = 4002) Buffers: shared hit=4 read=576Settings: random_page_cost = '1', search_path = '\"$user\", schema, public', temp_buffers = '80MB', work_mem = '1GB'Planning Time: 0.578 msExecution Time: 22617.351 msAs mentioned before, oracle does this much faster. 
-------------------------------------------------------------------------------------------------------------------------| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |-------------------------------------------------------------------------------------------------------------------------| 0 | SELECT STATEMENT | | 6878 | 2491K| | 2143 (1)| 00:00:01 || 1 | SORT ORDER BY | | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 || 2 | INLIST ITERATOR | | | | | | ||* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable | 6878 | 2491K| | 1597 (1)| 00:00:01 ||* 4 | INDEX RANGE SCAN | idx_entrytype | 6878 | | | 23 (0)| 00:00:01 |-------------------------------------------------------------------------------------------------------------------------Is there much I can analyze, any information you might need to further analyze this? \n\n\n-- Thanks,VijayMumbai, India\n\n\n\n-- Thanks,VijayMumbai, India",
"msg_date": "Fri, 7 May 2021 15:25:58 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On Thu, May 6, 2021 at 10:38 AM Semen Yefimenko <[email protected]>\nwrote:\n\n> Hi there,\n>\n> I've recently been involved in migrating our old system to SQL Server and\n> then PostgreSQL. Everything has been working fine so far but now after\n> executing our tests on Postgres, we saw a very slow running query on a\n> large table in our database.\n> I have tried asking on other platforms but no one has been able to give me\n> a satisfying answer.\n> ...\n>\n> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001\n> or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;\n>\n>\n>\nI know several people have suggested using `IN` to replace the or\nstatements, that would be my first go-to also. Another approach I have\nfound helpful is to keep in mind whenever you have an `OR` in a where\nclause it can be replaced with a `UNION ALL`. Usually the `UNION ALL` is\nfaster.\n\nI recommend avoiding `OR` in where clauses as much as possible. -\nSometimes it can't be helped, especially if you need an exclusive or, but\nmost of the time there is another way that is usually better.\n\nAnother thought is \"archivestatus\" really a boolean or does it have\nmultiple states? If it is actually a boolean, then can you change the\ncolumn data type?",
"msg_date": "Fri, 7 May 2021 08:50:08 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "For testing purposes I set up a separate postgres 13.2 instance on windows.\nTo my surprise, it works perfectly fine. Also indexes, have about 1/4 of\nthe size they had on 12.6.\nI'll try setting up a new 12.6 instance and see if I can reproduce\nanything.\n\nThis explain plan is on a SSD local postgres 13.2 instance with default\nsettings and not setting random_page_cost.\n\nGather Merge (cost=19444.07..19874.60 rows=3690 width=2638) (actual\ntime=41.633..60.538 rows=7087 loops=1)\n Output: column1, .. ,column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=2123\n -> Sort (cost=18444.05..18448.66 rows=1845 width=2638) (actual\ntime=4.057..4.595 rows=2362 loops=3)\n Output: column1, .. ,column54\n Sort Key: logtable.timestampcol1 DESC\n Sort Method: quicksort Memory: 3555kB\n Buffers: shared hit=2123\n Worker 0: actual time=0.076..0.077 rows=0 loops=1\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=7\n Worker 1: actual time=0.090..0.091 rows=0 loops=1\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=7\n -> Parallel Bitmap Heap Scan on schema.logtable\n (cost=61.84..16243.96 rows=1845 width=2638) (actual time=0.350..2.419\nrows=2362 loops=3)\n Output: column1, .. 
,column54\n Recheck Cond: ((logtable.entrytype = 4000) OR\n(logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n Filter: (logtable.tfnlogent_archivestatus <= 1)\n Heap Blocks: exact=2095\n Buffers: shared hit=2109\n Worker 0: actual time=0.030..0.030 rows=0 loops=1\n Worker 1: actual time=0.035..0.036 rows=0 loops=1\n -> BitmapOr (cost=61.84..61.84 rows=4428 width=0) (actual\ntime=0.740..0.742 rows=0 loops=1)\n Buffers: shared hit=14\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..19.50 rows=1476 width=0) (actual time=0.504..0.504 rows=5475\nloops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=7\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..19.50 rows=1476 width=0) (actual time=0.056..0.056 rows=830\nloops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=3\n -> Bitmap Index Scan on idx_entrytype\n (cost=0.00..19.50 rows=1476 width=0) (actual time=0.178..0.179 rows=782\nloops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=4\nPlanning Time: 0.212 ms\nExecution Time: 61.692 ms\n\nI've also installed a locally running 12.6 on windows.\nUnfortunately I couldn't reproduce the issue. I loaded the data with a tool\nthat I wrote a few months ago which basically independently from that\ndatabase inserts data and creates sequences and indexes.\nQuery also finishes in like 70 ~ ms. Then I've tried pg_dump into a\ndifferent database on the same dev database (where the slow query still\nexists). The performance is just as bad on this database and indexes are\nalso all 300 MB big (whereas on my locally running instance they're at\naround 80 MB)\nNow I'm trying to insert the data with the same tool I've used for my local\ninstallations on the remote dev database.\nThis will still take some time so I will update once I have this tested.\nSeems like there is something skewed going on with the development database\nso far.\n\n\nAm Fr., 7. 
Mai 2021 um 11:56 Uhr schrieb Vijaykumar Jain <\[email protected]>:\n\n> Is this on windows ?\n>\n> I see a thread that mentions of performance penalty due to parallel worker\n>\n>\n> There is a mailing thread with subject line -\n> Huge performance penalty with parallel queries in Windows x64 v. Linux x64\n>\n>\n>\n> On Fri, 7 May 2021 at 2:33 PM Semen Yefimenko <[email protected]>\n> wrote:\n>\n>> As mentionend in the slack comments :\n>>\n>> SELECT pg_size_pretty(pg_relation_size('logtable')) as table_size,\n>> pg_size_pretty(pg_relation_size('idx_entrytype')) as index_size,\n>> (pgstattuple('logtable')).dead_tuple_percent;\n>>\n>> table_size | index_size | dead_tuple_percent\n>> ------------+------------+--------------------\n>> 3177 MB | 289 MB | 0\n>>\n>> I have roughly 6 indexes which all have around 300 MB\n>>\n>> SELECT pg_relation_size('logtable') as table_size,\n>> pg_relation_size(idx_entrytype) as index_size,\n>> 100-(pgstatindex('idx_entrytype')).avg_leaf_density as bloat_ratio\n>>\n>> table_size | index_size | bloat_ratio\n>> ------------+------------+-------------------\n>> 3331694592 | 302555136 | 5.219999999999999\n>>\n>> Your queries:\n>>\n>> n_live_tup n_dead_tup\n>> 14118380 0\n>>\n>>\n>> For testing, I've also been running VACUUM and ANALYZE pretty much before\n>> every test run.\n>>\n>> Am Fr., 7. 
Mai 2021 um 10:44 Uhr schrieb Vijaykumar Jain <\n>> [email protected]>:\n>>\n>>> ok one last thing, not to be a PITA, but just in case if this helps.\n>>>\n>>> postgres=# SELECT * FROM pg_stat_user_indexes where relname =\n>>> 'logtable'; postgres=# SELECT * FROM pg_stat_user_tables where relname =\n>>> 'logtable';\n>>> basically, i just to verify if the table is not bloated.\n>>> looking at *n_live_tup* vs *n_dead_tup* would help understand it.\n>>>\n>>> if you see too many dead tuples,\n>>> vacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if\n>>> there are no tx using the dead tuples)\n>>>\n>>> and then run your query.\n>>>\n>>> Thanks,\n>>> Vijay\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> On Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]>\n>>> wrote:\n>>>\n>>>> Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not\n>>>> sure how I'm supposed to do it. (single E-Mails vs many)\n>>>>\n>>>>\n>>>>> Can you try tuning by increasing the shared_buffers slowly in steps\n>>>>> of 500MB, and running explain analyze against the query.\n>>>>\n>>>>\n>>>> -- 2500 MB shared buffers - random_page_cost = 1;\n>>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542)\n>>>> (actual time=2076.329..3737.050 rows=516517 loops=1)\n>>>> Output: column1, .. , column54\n>>>> Workers Planned: 2\n>>>> Workers Launched: 2\n>>>> Buffers: shared hit=295446\n>>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>>> time=2007.487..2202.707 rows=172172 loops=3)\n>>>> Output: column1, .. 
, column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 65154kB\n>>>> Worker 0: Sort Method: quicksort Memory: 55707kB\n>>>> Worker 1: Sort Method: quicksort Memory: 55304kB\n>>>> Buffers: shared hit=295446\n>>>> Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1\n>>>> Buffers: shared hit=91028\n>>>> Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1\n>>>> Buffers: shared hit=92133\n>>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>>> time=322.125..1618.971 rows=172172 loops=3)\n>>>> Output: column1, .. , column54\n>>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>>> Filter: (logtable.archivestatus <= 1)\n>>>> Heap Blocks: exact=110951\n>>>> Buffers: shared hit=295432\n>>>> Worker 0: actual time=282.201..1595.117 rows=161205\n>>>> loops=1\n>>>> Buffers: shared hit=91021\n>>>> Worker 1: actual time=303.671..1623.299 rows=161935\n>>>> loops=1\n>>>> Buffers: shared hit=92126\n>>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>>> (actual time=199.119..199.119 rows=0 loops=1)\n>>>> Buffers: shared hit=1334\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857\n>>>> rows=65970 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4000)\n>>>> Buffers: shared hit=172\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872\n>>>> rows=225283 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4001)\n>>>> Buffers: shared hit=581\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377\n>>>> rows=225264 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4002)\n>>>> Buffers: shared hit=581\n>>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem =\n>>>> '1GB'\n>>>> Planning Time: 
0.940 ms\n>>>> Execution Time: 4188.083 ms\n>>>>\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> -- 3000 MB shared buffers - random_page_cost = 1;\n>>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542)\n>>>> (actual time=2062.280..3763.408 rows=516517 loops=1)\n>>>> Output: column1, .. , column54\n>>>> Workers Planned: 2\n>>>> Workers Launched: 2\n>>>> Buffers: shared hit=295446\n>>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>>> time=1987.933..2180.422 rows=172172 loops=3)\n>>>> Output: column1, .. , column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 66602kB\n>>>> Worker 0: Sort Method: quicksort Memory: 55149kB\n>>>> Worker 1: Sort Method: quicksort Memory: 54415kB\n>>>> Buffers: shared hit=295446\n>>>> Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1\n>>>> Buffers: shared hit=89981\n>>>> Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1\n>>>> Buffers: shared hit=90141\n>>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>>> time=340.705..1603.796 rows=172172 loops=3)\n>>>> Output: column1, .. 
, column54\n>>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>>> Filter: (logtable.archivestatus <= 1)\n>>>> Heap Blocks: exact=113990\n>>>> Buffers: shared hit=295432\n>>>> Worker 0: actual time=317.918..1605.548 rows=159556\n>>>> loops=1\n>>>> Buffers: shared hit=89974\n>>>> Worker 1: actual time=304.744..1589.221 rows=158554\n>>>> loops=1\n>>>> Buffers: shared hit=90134\n>>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>>> (actual time=218.972..218.973 rows=0 loops=1)\n>>>> Buffers: shared hit=1334\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742\n>>>> rows=65970 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4000)\n>>>> Buffers: shared hit=172\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121\n>>>> rows=225283 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4001)\n>>>> Buffers: shared hit=581\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098\n>>>> rows=225264 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4002)\n>>>> Buffers: shared hit=581\n>>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem =\n>>>> '1GB'\n>>>> Planning Time: 2.717 ms\n>>>> Execution Time: 4224.670 ms\n>>>>\n>>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> -- 3500 MB shared buffers - random_page_cost = 1;\n>>>> Gather Merge (cost=343085.23..392186.19 rows=420836 width=2542)\n>>>> (actual time=3578.155..4932.858 rows=516517 loops=1)\n>>>> Output: column1, .. 
, column54\n>>>> Workers Planned: 2\n>>>> Workers Launched: 2\n>>>> Buffers: shared hit=14 read=295432 written=67\n>>>> -> Sort (cost=342085.21..342611.25 rows=210418 width=2542) (actual\n>>>> time=3482.159..3677.227 rows=172172 loops=3)\n>>>> Output: column1, .. , column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 58533kB\n>>>> Worker 0: Sort Method: quicksort Memory: 56878kB\n>>>> Worker 1: Sort Method: quicksort Memory: 60755kB\n>>>> Buffers: shared hit=14 read=295432 written=67\n>>>> Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1\n>>>> Buffers: shared hit=7 read=95783 written=25\n>>>> Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1\n>>>> Buffers: shared hit=5 read=101608 written=20\n>>>> -> Parallel Bitmap Heap Scan on schema.logtable\n>>>> (cost=5546.39..323481.21 rows=210418 width=2542) (actual\n>>>> time=345.111..3042.932 rows=172172 loops=3)\n>>>> Output: column1, .. , column54\n>>>> Recheck Cond: ((logtable.entrytype = 4000) OR\n>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n>>>> Filter: (logtable.archivestatus <= 1)\n>>>> Heap Blocks: exact=96709\n>>>> Buffers: shared hit=2 read=295430 written=67\n>>>> Worker 0: actual time=300.525..2999.403 rows=166842\n>>>> loops=1\n>>>> Buffers: shared read=95783 written=25\n>>>> Worker 1: actual time=300.552..3004.859 rows=179354\n>>>> loops=1\n>>>> Buffers: shared read=101606 written=20\n>>>> -> BitmapOr (cost=5546.39..5546.39 rows=510578 width=0)\n>>>> (actual time=241.996..241.997 rows=0 loops=1)\n>>>> Buffers: shared hit=2 read=1332\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130\n>>>> rows=65970 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4000)\n>>>> Buffers: shared read=172\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052\n>>>> rows=225283 loops=1)\n>>>> Index Cond: (logtable.entrytype 
= 4001)\n>>>> Buffers: shared hit=1 read=580\n>>>> -> Bitmap Index Scan on idx_entrytype\n>>>> (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800\n>>>> rows=225264 loops=1)\n>>>> Index Cond: (logtable.entrytype = 4002)\n>>>> Buffers: shared hit=1 read=580\n>>>> Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem =\n>>>> '1GB'\n>>>> Planning Time: 0.597 ms\n>>>> Execution Time: 5389.811 ms\n>>>>\n>>>>\n>>>> This doesn't seem to have had an effect.\n>>>> Thanks for the suggestion.\n>>>>\n>>>> Have you try of excluding not null from index? Can you give dispersion\n>>>>> of archivestatus?\n>>>>>\n>>>>\n>>>> Yes I have, it yielded the same performance boost as :\n>>>>\n>>>> create index test on logtable(entrytype) where archivestatus <= 1;\n>>>>\n>>>> I wonder what the old query plan was...\n>>>>> Would you include links to your prior correspondance ?\n>>>>\n>>>>\n>>>> So prior Execution Plans are present in the SO.\n>>>> The other forums I've tried are the official slack channel :\n>>>> https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600\n>>>> And SO :\n>>>> https://stackoverflow.com/questions/67401792/slow-running-postgresql-query\n>>>> But I think most of the points discussed in these posts have already\n>>>> been mentionend by you except bloating of indexes.\n>>>>\n>>>> Oracle is apparently doing a single scan on \"entrytype\".\n>>>>> As a test, you could try forcing that, like:\n>>>>> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;\n>>>>> or\n>>>>> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;\n>>>>\n>>>>\n>>>> I've tried enable_bitmapscan=off but it didn't yield any good results.\n>>>>\n>>>> -- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to\n>>>> off\n>>>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542)\n>>>> (actual time=7716.031..9043.399 rows=516517 loops=1)\n>>>> Output: column1, .., column54\n>>>> Workers Planned: 2\n>>>> Workers 
Launched: 2\n>>>> Buffers: shared hit=192 read=406605\n>>>> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n>>>> time=7642.666..7835.527 rows=172172 loops=3)\n>>>> Output: column1, .., column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 58803kB\n>>>> Worker 0: Sort Method: quicksort Memory: 60376kB\n>>>> Worker 1: Sort Method: quicksort Memory: 56988kB\n>>>> Buffers: shared hit=192 read=406605\n>>>> Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1\n>>>> Buffers: shared hit=78 read=137826\n>>>> Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1\n>>>> Buffers: shared hit=80 read=132672\n>>>> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n>>>> rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3)\n>>>> Output: column1, .., column54\n>>>> Filter: ((logtable.acrhivestatus <= 1) AND\n>>>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>>>> (logtable.entrytype = 4002)))\n>>>> Rows Removed by Filter: 4533459\n>>>> Buffers: shared hit=96 read=406605\n>>>> Worker 0: actual time=1.537..7158.286 rows=177637 loops=1\n>>>> Buffers: shared hit=30 read=137826\n>>>> Worker 1: actual time=1.414..7161.670 rows=167316 loops=1\n>>>> Buffers: shared hit=32 read=132672\n>>>> Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem =\n>>>> '1GB'\n>>>> Planning Time: 0.725 ms\n>>>> Execution Time: 9500.928 ms\n>>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> 2000 MB shared buffers - random_page_cost = 4 - -- 2000 -- 2000 MB shared\n>>>> buffers - random_page_cost = 1 - enable_bitmapscan to off\n>>>> Gather Merge (cost=543949.72..593050.69 rows=420836 width=2542)\n>>>> (actual time=7519.032..8871.433 rows=516517 loops=1)\n>>>> Output: column1, .., column54\n>>>> Workers Planned: 2\n>>>> 
Workers Launched: 2\n>>>> Buffers: shared hit=576 read=406221\n>>>> -> Sort (cost=542949.70..543475.75 rows=210418 width=2542) (actual\n>>>> time=7451.958..7649.480 rows=172172 loops=3)\n>>>> Output: column1, .., column54\n>>>> Sort Key: logtable.timestampcol DESC\n>>>> Sort Method: quicksort Memory: 58867kB\n>>>> Worker 0: Sort Method: quicksort Memory: 58510kB\n>>>> Worker 1: Sort Method: quicksort Memory: 58788kB\n>>>> Buffers: shared hit=576 read=406221\n>>>> Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1\n>>>> Buffers: shared hit=203 read=135166\n>>>> Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1\n>>>> Buffers: shared hit=202 read=135225\n>>>> -> Parallel Seq Scan on schema.logtable (cost=0.00..524345.70\n>>>> rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)\n>>>> Output: column1, .., column54\n>>>> Filter: ((logtable.acrhivestatus <= 1) AND\n>>>> ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR\n>>>> (logtable.entrytype = 4002)))\n>>>> Rows Removed by Filter: 4533459\n>>>> Buffers: shared hit=480 read=406221\n>>>> Worker 0: actual time=2.628..7006.420 rows=172085 loops=1\n>>>> Buffers: shared hit=155 read=135166\n>>>> Worker 1: actual time=3.948..6978.154 rows=172948 loops=1\n>>>> Buffers: shared hit=154 read=135225\n>>>> Settings: enable_bitmapscan = 'off', random_page_cost = '1',\n>>>> temp_buffers = '80MB', work_mem = '1GB'\n>>>> Planning Time: 0.621 ms\n>>>> Execution Time: 9339.457 ms\n>>>>\n>>>> Have you tune shared buffers enough? Each block is of 8k by default.\n>>>>> BTW, please try to reset random_page_cost.\n>>>>\n>>>>\n>>>> Look above.\n>>>>\n>>>> I will try upgrading the minor version next.\n>>>> I will also try setting up a 13.X version locally and import the data\n>>>> from 12.2 to 13.X and see if it might be faster.\n>>>>\n>>>>\n>>>> Am Do., 6. Mai 2021 um 23:16 Uhr schrieb Imre Samu <\n>>>> [email protected]>:\n>>>>\n>>>>> *> Postgres Version : *PostgreSQL 12.2,\n>>>>> > ... 
ON ... USING btree
>>>>>
>>>>> IMHO:
>>>>> The next minor (bugfix & security) release is near ( expected ~ May 13th,
>>>>> 2021 ) https://www.postgresql.org/developer/roadmap/
>>>>> so you can update your PostgreSQL to 12.7 ( + full Reindexing
>>>>> recommended ! )
>>>>>
>>>>> You can find a lot of B-tree index-related fixes.
>>>>> https://www.postgresql.org/docs/12/release-12-3.html Release date:
>>>>> 2020-05-14
>>>>> - Fix possible undercounting of deleted B-tree index pages in VACUUM
>>>>> VERBOSE output
>>>>> - Fix wrong bookkeeping for oldest deleted page in a B-tree index
>>>>> - Ensure INCLUDE'd columns are always removed from B-tree pivot tuples
>>>>> https://www.postgresql.org/docs/12/release-12-4.html
>>>>> - Avoid repeated marking of dead btree index entries as dead
>>>>> https://www.postgresql.org/docs/12/release-12-5.html
>>>>> - Fix failure of parallel B-tree index scans when the index
>>>>> condition is unsatisfiable
>>>>> https://www.postgresql.org/docs/12/release-12-6.html Release date:
>>>>> 2021-02-11
>>>>>
>>>>>
>>>>> > COLLATE pg_catalog."default"
>>>>>
>>>>> You can test the "C" Collation in some columns (keys ? ) ; in
>>>>> theory, it should be faster :
>>>>> "The drawback of using locales other than C or POSIX in PostgreSQL is
>>>>> its performance impact. It slows character handling and prevents ordinary
>>>>> indexes from being used by LIKE. For this reason use locales only if you
>>>>> actually need them."
>>>>> https://www.postgresql.org/docs/12/locale.html
>>>>>
>>>>> https://www.postgresql.org/message-id/flat/CAF6DVKNU0vb4ZeQQ-%3Dagg69QJU3wdjPnMYYrPYY7CKc6iOU7eQ%40mail.gmail.com
>>>>>
>>>>> Best,
>>>>> Imre
>>>>>
>>>>>
>>>>> Semen Yefimenko <[email protected]> wrote (on Thu, 6 May 2021
>>>>> at 16:38):
>>>>>
>>>>>> Hi there,
>>>>>>
>>>>>> I've recently been involved in migrating our old system to SQL Server
>>>>>> and then PostgreSQL. Everything has been working fine so far but now after
>>>>>> executing our tests on Postgres, we saw a very slow running query on a
>>>>>> large table in our database.
>>>>>> I have tried asking on other platforms but no one has been able to
>>>>>> give me a satisfying answer.
>>>>>>
>>>>>> *Postgres Version : *PostgreSQL 12.2, compiled by Visual C++ build
>>>>>> 1914, 64-bit
>>>>>> No notable errors in the Server log and the Postgres Server itself.
>>>>>>
>>>>>> The table structure :
>>>>>>
>>>>>> CREATE TABLE logtable
>>>>>> (
>>>>>> key character varying(20) COLLATE pg_catalog."default" NOT NULL,
>>>>>> id integer,
>>>>>> column3 integer,
>>>>>> column4 integer,
>>>>>> column5 integer,
>>>>>> column6 integer,
>>>>>> column7 integer,
>>>>>> column8 integer,
>>>>>> column9 character varying(128) COLLATE pg_catalog."default",
>>>>>> column10 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column11 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column12 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column13 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column14 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column15 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column16 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column17 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column18 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column19 character varying(2048) COLLATE pg_catalog."default",
>>>>>> column21 character varying(256) COLLATE pg_catalog."default",
>>>>>> column22 character varying(256) COLLATE pg_catalog."default",
>>>>>> column23 character varying(256) COLLATE pg_catalog."default",
>>>>>> column24 character varying(256) COLLATE pg_catalog."default",
>>>>>> column25 character varying(256) COLLATE pg_catalog."default",
>>>>>> column26 character varying(256) COLLATE pg_catalog."default",
>>>>>> column27 character varying(256) COLLATE pg_catalog."default",
>>>>>> column28 character varying(256) COLLATE pg_catalog."default",
>>>>>> column29 character varying(256) COLLATE pg_catalog."default",
>>>>>> column30 character varying(256) COLLATE pg_catalog."default",
>>>>>> column31 character varying(256) COLLATE pg_catalog."default",
>>>>>> column32 character varying(256) COLLATE pg_catalog."default",
>>>>>> column33 character varying(256) COLLATE pg_catalog."default",
>>>>>> column34 character varying(256) COLLATE pg_catalog."default",
>>>>>> column35 character varying(256) COLLATE pg_catalog."default",
>>>>>> entrytype integer,
>>>>>> column37 bigint,
>>>>>> column38 bigint,
>>>>>> column39 bigint,
>>>>>> column40 bigint,
>>>>>> column41 bigint,
>>>>>> column42 bigint,
>>>>>> column43 bigint,
>>>>>> column44 bigint,
>>>>>> column45 bigint,
>>>>>> column46 bigint,
>>>>>> column47 character varying(128) COLLATE pg_catalog."default",
>>>>>> timestampcol timestamp without time zone,
>>>>>> column49 timestamp without time zone,
>>>>>> column50 timestamp without time zone,
>>>>>> column51 timestamp without time zone,
>>>>>> column52 timestamp without time zone,
>>>>>> archivestatus integer,
>>>>>> column54 integer,
>>>>>> column55 character varying(20) COLLATE pg_catalog."default",
>>>>>> CONSTRAINT pkey PRIMARY KEY (key)
>>>>>> USING INDEX TABLESPACE tablespace
>>>>>> )
>>>>>>
>>>>>> TABLESPACE tablespace;
>>>>>>
>>>>>> ALTER TABLE schema.logtable
>>>>>> OWNER to user;
>>>>>>
>>>>>> CREATE INDEX idx_timestampcol
>>>>>> ON schema.logtable USING btree
>>>>>> ( timestampcol ASC NULLS LAST )
>>>>>> TABLESPACE tablespace ;
>>>>>>
>>>>>> CREATE INDEX idx_test2
>>>>>> ON schema.logtable USING btree
>>>>>> ( entrytype ASC NULLS LAST)
>>>>>> TABLESPACE tablespace
>>>>>> WHERE archivestatus <= 1;
>>>>>>
>>>>>> CREATE INDEX idx_arcstatus
>>>>>> ON schema.logtable USING btree
>>>>>> ( archivestatus ASC NULLS LAST)
>>>>>> TABLESPACE tablespace;
>>>>>>
>>>>>> CREATE INDEX idx_entrytype
>>>>>> ON schema.logtable USING btree
>>>>>> ( entrytype ASC NULLS LAST)
>>>>>> TABLESPACE tablespace ;
>>>>>>
>>>>>>
>>>>>> The table contains 14.000.000 entries and has about 3.3 GB of data:
>>>>>> No triggers, inserts per day, probably 5-20 K per day.
>>>>>>
>>>>>> SELECT relname, relpages, reltuples, relallvisible, relkind,
>>>>>> relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class
>>>>>> WHERE relname='logtable';
>>>>>>
>>>>>> relname
>>>>>> |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|
>>>>>>
>>>>>> ------------------|--------|---------|-------------|-------|--------|--------------|----------|-------------|
>>>>>> logtable | 405988| 14091424| 405907|r |
>>>>>> 54|false |NULL | 3326803968|
>>>>>>
>>>>>>
>>>>>> The slow running query:
>>>>>>
>>>>>> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype =
>>>>>> 4001 or entrytype = 4002) and (archivestatus <= 1)) order by timestampcol
>>>>>> desc;
>>>>>>
>>>>>>
>>>>>> This query runs in about 45-60 seconds.
>>>>>> The same query runs in about 289 ms Oracle and 423 ms in SQL-Server.
>>>>>> Now I understand that actually loading all results would take a
>>>>>> while. (about 520K or so rows)
>>>>>> But that shouldn't be exactly what happens right? There should be a
>>>>>> resultset iterator which can retrieve all data but doesn't from the get go.
>>>>>>
>>>>>> With the help of some people in the slack and so thread, I've found a
>>>>>> configuration parameter which helps performance :
>>>>>>
>>>>>> set random_page_cost = 1;
>>>>>>
>>>>>> This improved performance from 45-60 s to 15-35 s. (since we are
>>>>>> using ssd's)
>>>>>> Still not acceptable but definitely an improvement.
>>>>>> Some maybe relevant system parameters:
>>>>>>
>>>>>> effective_cache_size 4GB
>>>>>> maintenance_work_mem 1GB
>>>>>> shared_buffers 2GB
>>>>>> work_mem 1GB
>>>>>>
>>>>>>
>>>>>> Currently I'm accessing the data through DbBeaver (JDBC -
>>>>>> postgresql-42.2.5.jar) and our JAVA application (JDBC -
>>>>>> postgresql-42.2.19.jar). Both use the defaultRowFetchSize=5000 to not load
>>>>>> everything into memory and limit the results.
>>>>>> The explain plan:
>>>>>>
>>>>>> EXPLAIN (ANALYZE, BUFFERS, SETTINGS, VERBOSE)...
>>>>>> (Above Query)
>>>>>>
>>>>>>
>>>>>> Gather Merge (cost=347142.71..397196.91 rows=429006 width=2558)
>>>>>> (actual time=21210.019..22319.444 rows=515841 loops=1)
>>>>>> Output: column1, .. , column54
>>>>>> Workers Planned: 2
>>>>>> Workers Launched: 2
>>>>>> Buffers: shared hit=141487 read=153489
>>>>>> -> Sort (cost=346142.69..346678.95 rows=214503 width=2558)
>>>>>> (actual time=21148.887..21297.428 rows=171947 loops=3)
>>>>>> Output: column1, .. , column54
>>>>>> Sort Key: logtable.timestampcol DESC
>>>>>> Sort Method: quicksort Memory: 62180kB
>>>>>> Worker 0: Sort Method: quicksort Memory: 56969kB
>>>>>> Worker 1: Sort Method: quicksort Memory: 56837kB
>>>>>> Buffers: shared hit=141487 read=153489
>>>>>> Worker 0: actual time=21129.973..21296.839 rows=166776 loops=1
>>>>>> Buffers: shared hit=45558 read=49514
>>>>>> Worker 1: actual time=21114.439..21268.117 rows=165896 loops=1
>>>>>> Buffers: shared hit=45104 read=49506
>>>>>> -> Parallel Bitmap Heap Scan on schema.logtable
>>>>>> (cost=5652.74..327147.77 rows=214503 width=2558) (actual
>>>>>> time=1304.813..20637.462 rows=171947 loops=3)
>>>>>> Output: column1, .. , column54
>>>>>> Recheck Cond: ((logtable.entrytype = 4000) OR
>>>>>> (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
>>>>>> Filter: (logtable.archivestatus <= 1)
>>>>>> Heap Blocks: exact=103962
>>>>>> Buffers: shared hit=141473 read=153489
>>>>>> Worker 0: actual time=1280.472..20638.620 rows=166776
>>>>>> loops=1
>>>>>> Buffers: shared hit=45551 read=49514
>>>>>> Worker 1: actual time=1275.274..20626.219 rows=165896
>>>>>> loops=1
>>>>>> Buffers: shared hit=45097 read=49506
>>>>>> -> BitmapOr (cost=5652.74..5652.74 rows=520443
>>>>>> width=0) (actual time=1179.438..1179.438 rows=0 loops=1)
>>>>>> Buffers: shared hit=9 read=1323
>>>>>> -> Bitmap Index Scan on idx_entrytype
>>>>>> (cost=0.00..556.61 rows=54957 width=0) (actual time=161.939..161.940
>>>>>> rows=65970 loops=1)
>>>>>> Index Cond: (logtable.entrytype = 4000)
>>>>>> Buffers: shared hit=1 read=171
>>>>>> -> Bitmap Index Scan on idx_entrytype
>>>>>> (cost=0.00..2243.22 rows=221705 width=0) (actual time=548.849..548.849
>>>>>> rows=224945 loops=1)
>>>>>> Index Cond: (logtable.entrytype = 4001)
>>>>>> Buffers: shared hit=4 read=576
>>>>>> -> Bitmap Index Scan on idx_entrytype
>>>>>> (cost=0.00..2466.80 rows=243782 width=0) (actual time=468.637..468.637
>>>>>> rows=224926 loops=1)
>>>>>> Index Cond: (logtable.entrytype = 4002)
>>>>>> Buffers: shared hit=4 read=576
>>>>>> Settings: random_page_cost = '1', search_path = '"$user", schema,
>>>>>> public', temp_buffers = '80MB', work_mem = '1GB'
>>>>>> Planning Time: 0.578 ms
>>>>>> Execution Time: 22617.351 ms
>>>>>>
>>>>>> As mentioned before, oracle does this much faster.
>>>>>>
>>>>>>
>>>>>> -------------------------------------------------------------------------------------------------------------------------
>>>>>> | Id | Operation | Name
>>>>>> | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
>>>>>>
>>>>>> -------------------------------------------------------------------------------------------------------------------------
>>>>>> | 0 | SELECT STATEMENT |
>>>>>> | 6878 | 2491K| | 2143 (1)| 00:00:01 |
>>>>>> | 1 | SORT ORDER BY |
>>>>>> | 6878 | 2491K| 3448K| 2143 (1)| 00:00:01 |
>>>>>> | 2 | INLIST ITERATOR |
>>>>>> | | | | | |
>>>>>> |* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| logtable
>>>>>> | 6878 | 2491K| | 1597 (1)| 00:00:01 |
>>>>>> |* 4 | INDEX RANGE SCAN | idx_entrytype
>>>>>> | 6878 | | | 23 (0)| 00:00:01 |
>>>>>>
>>>>>> -------------------------------------------------------------------------------------------------------------------------
>>>>>>
>>>>>> Is there much I can analyze, any information you might need to
>>>>>> further analyze this?
>>>>>>
>>>>>
>>>
>>> --
>>> Thanks,
>>> Vijay
>>> Mumbai, India
>>>
>> --
> Thanks,
> Vijay
> Mumbai, India
>

For testing purposes I set up a separate postgres 13.2 instance on windows.
To my surprise, it works perfectly fine. Also, indexes have about 1/4 of the
size they had on 12.6. I'll try setting up a new 12.6 instance and see if I
can reproduce anything.

This explain plan is on a SSD local postgres 13.2 instance with default
settings and without setting random_page_cost.

Gather Merge  (cost=19444.07..19874.60 rows=3690 width=2638) (actual time=41.633..60.538 rows=7087 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=2123
  ->  Sort  (cost=18444.05..18448.66 rows=1845 width=2638) (actual time=4.057..4.595 rows=2362 loops=3)
        Output: column1, .. , column54
        Sort Key: logtable.timestampcol1 DESC
        Sort Method: quicksort  Memory: 3555kB
        Buffers: shared hit=2123
        Worker 0: actual time=0.076..0.077 rows=0 loops=1
          Sort Method: quicksort  Memory: 25kB
          Buffers: shared hit=7
        Worker 1: actual time=0.090..0.091 rows=0 loops=1
          Sort Method: quicksort  Memory: 25kB
          Buffers: shared hit=7
        ->  Parallel Bitmap Heap Scan on schema.logtable  (cost=61.84..16243.96 rows=1845 width=2638) (actual time=0.350..2.419 rows=2362 loops=3)
              Output: column1, .. , column54
              Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
              Filter: (logtable.tfnlogent_archivestatus <= 1)
              Heap Blocks: exact=2095
              Buffers: shared hit=2109
              Worker 0: actual time=0.030..0.030 rows=0 loops=1
              Worker 1: actual time=0.035..0.036 rows=0 loops=1
              ->  BitmapOr  (cost=61.84..61.84 rows=4428 width=0) (actual time=0.740..0.742 rows=0 loops=1)
                    Buffers: shared hit=14
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..19.50 rows=1476 width=0) (actual time=0.504..0.504 rows=5475 loops=1)
                          Index Cond: (logtable.entrytype = 4000)
                          Buffers: shared hit=7
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..19.50 rows=1476 width=0) (actual time=0.056..0.056 rows=830 loops=1)
                          Index Cond: (logtable.entrytype = 4001)
                          Buffers: shared hit=3
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..19.50 rows=1476 width=0) (actual time=0.178..0.179 rows=782 loops=1)
                          Index Cond: (logtable.entrytype = 4002)
                          Buffers: shared hit=4
Planning Time: 0.212 ms
Execution Time: 61.692 ms

I've also installed a locally running 12.6 on windows. Unfortunately I
couldn't reproduce the issue. I loaded the data with a tool that I wrote a
few months ago which basically, independently from that database, inserts
data and creates sequences and indexes. The query also finishes in about
70 ms. Then I've tried pg_dump into a different database on the same dev
database (where the slow query still exists). The performance is just as
bad on this database and the indexes are also all 300 MB big (whereas on my
locally running instance they're at around 80 MB). Now I'm trying to insert
the data with the same tool I've used for my local installations on the
remote dev database. This will still take some time, so I will update once
I have this tested. Seems like there is something skewed going on with the
development database so far.

On Fri, 7 May 2021 at 11:56, Vijaykumar Jain <[email protected]>
wrote:

Is this on windows? I see a thread that mentions a performance penalty due
to parallel workers. There is a mailing thread with subject line - Huge
performance penalty with parallel queries in Windows x64 v. Linux x64

On Fri, 7 May 2021 at 2:33 PM, Semen Yefimenko <[email protected]>
wrote:

As mentioned in the slack comments :

SELECT pg_size_pretty(pg_relation_size('logtable')) as table_size,
pg_size_pretty(pg_relation_size('idx_entrytype')) as index_size,
(pgstattuple('logtable')).dead_tuple_percent;

table_size | index_size | dead_tuple_percent
-----------+------------+-------------------
3177 MB    | 289 MB     | 0

I have roughly 6 indexes which all have around 300 MB

SELECT pg_relation_size('logtable') as table_size,
pg_relation_size(idx_entrytype) as index_size,
100-(pgstatindex('idx_entrytype')).avg_leaf_density as bloat_ratio

table_size | index_size | bloat_ratio
-----------+------------+-------------------
3331694592 | 302555136  | 5.219999999999999

Your queries:

n_live_tup | n_dead_tup
-----------+-----------
14118380   | 0

For testing, I've also been running VACUUM and ANALYZE pretty much before
every test run.

On Fri, 7 May 2021 at 10:44, Vijaykumar Jain <[email protected]>
wrote:

ok one last thing, not to be a PITA, but just in case if this helps.

postgres=# SELECT * FROM pg_stat_user_indexes where relname = 'logtable';
postgres=# SELECT * FROM pg_stat_user_tables where relname = 'logtable';

basically, i just want to verify that the table is not bloated.
looking at n_live_tup vs n_dead_tup would help understand it.
if you see too many dead tuples,
vacuum (ANALYZE,verbose) logtable; -- would get rid of dead tuples (if there are no tx using the dead tuples)
and then run your query.

Thanks,
Vijay

On Fri, 7 May 2021 at 13:34, Semen Yefimenko <[email protected]>
wrote:

Sorry if I'm cumulatively answering everyone in one E-Mail, I'm not sure how
I'm supposed to do it. (single E-Mails vs many)

> Can you try tuning by increasing the shared_buffers slowly in steps of
> 500MB, and running explain analyze against the query.

-- 2500 MB shared buffers - random_page_cost = 1;

Gather Merge  (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=2076.329..3737.050 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=295446
  ->  Sort  (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=2007.487..2202.707 rows=172172 loops=3)
        Output: column1, .. , column54
        Sort Key: logtable.timestampcol DESC
        Sort Method: quicksort  Memory: 65154kB
        Worker 0: Sort Method: quicksort  Memory: 55707kB
        Worker 1: Sort Method: quicksort  Memory: 55304kB
        Buffers: shared hit=295446
        Worker 0: actual time=1963.969..2156.624 rows=161205 loops=1
          Buffers: shared hit=91028
        Worker 1: actual time=1984.700..2179.697 rows=161935 loops=1
          Buffers: shared hit=92133
        ->  Parallel Bitmap Heap Scan on schema.logtable  (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=322.125..1618.971 rows=172172 loops=3)
              Output: column1, .. , column54
              Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
              Filter: (logtable.archivestatus <= 1)
              Heap Blocks: exact=110951
              Buffers: shared hit=295432
              Worker 0: actual time=282.201..1595.117 rows=161205 loops=1
                Buffers: shared hit=91021
              Worker 1: actual time=303.671..1623.299 rows=161935 loops=1
                Buffers: shared hit=92126
              ->  BitmapOr  (cost=5546.39..5546.39 rows=510578 width=0) (actual time=199.119..199.119 rows=0 loops=1)
                    Buffers: shared hit=1334
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..682.13 rows=67293 width=0) (actual time=28.856..28.857 rows=65970 loops=1)
                          Index Cond: (logtable.entrytype = 4000)
                          Buffers: shared hit=172
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2223.63 rows=219760 width=0) (actual time=108.871..108.872 rows=225283 loops=1)
                          Index Cond: (logtable.entrytype = 4001)
                          Buffers: shared hit=581
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2261.87 rows=223525 width=0) (actual time=61.377..61.377 rows=225264 loops=1)
                          Index Cond: (logtable.entrytype = 4002)
                          Buffers: shared hit=581
Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 0.940 ms
Execution Time: 4188.083 ms

-- 3000 MB shared buffers - random_page_cost = 1;

Gather Merge  (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=2062.280..3763.408 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=295446
  ->  Sort  (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=1987.933..2180.422 rows=172172 loops=3)
        Output: column1, .. , column54
        Sort Key: logtable.timestampcol DESC
        Sort Method: quicksort  Memory: 66602kB
        Worker 0: Sort Method: quicksort  Memory: 55149kB
        Worker 1: Sort Method: quicksort  Memory: 54415kB
        Buffers: shared hit=295446
        Worker 0: actual time=1963.059..2147.916 rows=159556 loops=1
          Buffers: shared hit=89981
        Worker 1: actual time=1949.726..2136.200 rows=158554 loops=1
          Buffers: shared hit=90141
        ->  Parallel Bitmap Heap Scan on schema.logtable  (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=340.705..1603.796 rows=172172 loops=3)
              Output: column1, .. , column54
              Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
              Filter: (logtable.archivestatus <= 1)
              Heap Blocks: exact=113990
              Buffers: shared hit=295432
              Worker 0: actual time=317.918..1605.548 rows=159556 loops=1
                Buffers: shared hit=89974
              Worker 1: actual time=304.744..1589.221 rows=158554 loops=1
                Buffers: shared hit=90134
              ->  BitmapOr  (cost=5546.39..5546.39 rows=510578 width=0) (actual time=218.972..218.973 rows=0 loops=1)
                    Buffers: shared hit=1334
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..682.13 rows=67293 width=0) (actual time=37.741..37.742 rows=65970 loops=1)
                          Index Cond: (logtable.entrytype = 4000)
                          Buffers: shared hit=172
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2223.63 rows=219760 width=0) (actual time=119.120..119.121 rows=225283 loops=1)
                          Index Cond: (logtable.entrytype = 4001)
                          Buffers: shared hit=581
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2261.87 rows=223525 width=0) (actual time=62.097..62.098 rows=225264 loops=1)
                          Index Cond: (logtable.entrytype = 4002)
                          Buffers: shared hit=581
Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 2.717 ms
Execution Time: 4224.670 ms

-- 3500 MB shared buffers - random_page_cost = 1;

Gather Merge  (cost=343085.23..392186.19 rows=420836 width=2542) (actual time=3578.155..4932.858 rows=516517 loops=1)
  Output: column1, .. , column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=14 read=295432 written=67
  ->  Sort  (cost=342085.21..342611.25 rows=210418 width=2542) (actual time=3482.159..3677.227 rows=172172 loops=3)
        Output: column1, .. , column54
        Sort Key: logtable.timestampcol DESC
        Sort Method: quicksort  Memory: 58533kB
        Worker 0: Sort Method: quicksort  Memory: 56878kB
        Worker 1: Sort Method: quicksort  Memory: 60755kB
        Buffers: shared hit=14 read=295432 written=67
        Worker 0: actual time=3435.131..3632.985 rows=166842 loops=1
          Buffers: shared hit=7 read=95783 written=25
        Worker 1: actual time=3441.545..3649.345 rows=179354 loops=1
          Buffers: shared hit=5 read=101608 written=20
        ->  Parallel Bitmap Heap Scan on schema.logtable  (cost=5546.39..323481.21 rows=210418 width=2542) (actual time=345.111..3042.932 rows=172172 loops=3)
              Output: column1, .. , column54
              Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))
              Filter: (logtable.archivestatus <= 1)
              Heap Blocks: exact=96709
              Buffers: shared hit=2 read=295430 written=67
              Worker 0: actual time=300.525..2999.403 rows=166842 loops=1
                Buffers: shared read=95783 written=25
              Worker 1: actual time=300.552..3004.859 rows=179354 loops=1
                Buffers: shared read=101606 written=20
              ->  BitmapOr  (cost=5546.39..5546.39 rows=510578 width=0) (actual time=241.996..241.997 rows=0 loops=1)
                    Buffers: shared hit=2 read=1332
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..682.13 rows=67293 width=0) (actual time=37.129..37.130 rows=65970 loops=1)
                          Index Cond: (logtable.entrytype = 4000)
                          Buffers: shared read=172
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2223.63 rows=219760 width=0) (actual time=131.051..131.052 rows=225283 loops=1)
                          Index Cond: (logtable.entrytype = 4001)
                          Buffers: shared hit=1 read=580
                    ->  Bitmap Index Scan on idx_entrytype  (cost=0.00..2261.87 rows=223525 width=0) (actual time=73.800..73.800 rows=225264 loops=1)
                          Index Cond: (logtable.entrytype = 4002)
                          Buffers: shared hit=1 read=580
Settings: random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 0.597 ms
Execution Time: 5389.811 ms

This doesn't seem to have had an effect. Thanks for the suggestion.

> Have you tried excluding not null from the index? Can you give the
> dispersion of archivestatus?

Yes I have, it yielded the same performance boost as :

create index test on logtable(entrytype) where archivestatus <= 1;

> I wonder what the old query plan was...
> Would you include links to your prior correspondence ?

So prior Execution Plans are present in the SO post.
The other forums I've tried are the official slack channel :
https://postgresteam.slack.com/archives/C0FS3UTAP/p1620286295228600
And SO :
https://stackoverflow.com/questions/67401792/slow-running-postgresql-query
But I think most of the points discussed in these posts have already been
mentioned by you, except bloating of indexes.

> Oracle is apparently doing a single scan on "entrytype".
> As a test, you could try forcing that, like:
> begin; SET enable_bitmapscan=off ; explain (analyze) [...]; rollback;
> or
> begin; DROP INDEX idx_arcstatus; explain (analyze) [...]; rollback;

I've tried enable_bitmapscan=off but it didn't yield any good results.

-- 2000 MB shared buffers - random_page_cost = 4 - enable_bitmapscan to off

Gather Merge  (cost=543949.72..593050.69 rows=420836 width=2542) (actual time=7716.031..9043.399 rows=516517 loops=1)
  Output: column1, .., column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=192 read=406605
  ->  Sort  (cost=542949.70..543475.75 rows=210418 width=2542) (actual time=7642.666..7835.527 rows=172172 loops=3)
        Output: column1, .., column54
        Sort Key: logtable.timestampcol DESC
        Sort Method: quicksort  Memory: 58803kB
        Worker 0: Sort Method: quicksort  Memory: 60376kB
        Worker 1: Sort Method: quicksort  Memory: 56988kB
        Buffers: shared hit=192 read=406605
        Worker 0: actual time=7610.482..7814.905 rows=177637 loops=1
          Buffers: shared hit=78 read=137826
        Worker 1: actual time=7607.645..7803.561 rows=167316 loops=1
          Buffers: shared hit=80 read=132672
        ->  Parallel Seq Scan on schema.logtable  (cost=0.00..524345.70 rows=210418 width=2542) (actual time=1.669..7189.365 rows=172172 loops=3)
              Output: column1, .., column54
              Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))
              Rows Removed by Filter: 4533459
              Buffers: shared hit=96 read=406605
              Worker 0: actual time=1.537..7158.286 rows=177637 loops=1
                Buffers: shared hit=30 read=137826
              Worker 1: actual time=1.414..7161.670 rows=167316 loops=1
                Buffers: shared hit=32 read=132672
Settings: enable_bitmapscan = 'off', temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 0.725 ms
Execution Time: 9500.928 ms

-- 2000 MB shared buffers - random_page_cost = 1 - enable_bitmapscan to off

Gather Merge  (cost=543949.72..593050.69 rows=420836 width=2542) (actual time=7519.032..8871.433 rows=516517 loops=1)
  Output: column1, .., column54
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=576 read=406221
  ->  Sort  (cost=542949.70..543475.75 rows=210418 width=2542) (actual time=7451.958..7649.480 rows=172172 loops=3)
        Output: column1, .., column54
        Sort Key: logtable.timestampcol DESC
        Sort Method: quicksort  Memory: 58867kB
        Worker 0: Sort Method: quicksort  Memory: 58510kB
        Worker 1: Sort Method: quicksort  Memory: 58788kB
        Buffers: shared hit=576 read=406221
        Worker 0: actual time=7438.271..7644.241 rows=172085 loops=1
          Buffers: shared hit=203 read=135166
        Worker 1: actual time=7407.574..7609.922 rows=172948 loops=1
          Buffers: shared hit=202 read=135225
        ->  Parallel Seq Scan on schema.logtable  (cost=0.00..524345.70 rows=210418 width=2542) (actual time=2.839..7017.729 rows=172172 loops=3)
              Output: column1, .., column54
              Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))
              Rows Removed by Filter: 4533459
              Buffers: shared hit=480 read=406221
              Worker 0: actual time=2.628..7006.420 rows=172085 loops=1
                Buffers: shared hit=155 read=135166
              Worker 1: actual time=3.948..6978.154 rows=172948 loops=1
                Buffers: shared hit=154 read=135225
Settings: enable_bitmapscan = 'off', random_page_cost = '1', temp_buffers = '80MB', work_mem = '1GB'
Planning Time: 0.621 ms
Execution Time: 9339.457 ms

> Have you tuned shared buffers enough? Each block is of 8k by default.
> BTW, please try to reset random_page_cost.

Look above. I will try upgrading the minor version next.
I will also try setting up a 13.X version locally and import the data from
12.2 to 13.X and see if it might be faster.
"msg_date": "Fri, 7 May 2021 17:57:19 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On Fri, May 07, 2021 at 05:57:19PM +0200, Semen Yefimenko wrote:\n> For testing purposes I set up a separate postgres 13.2 instance on windows.\n> To my surprise, it works perfectly fine. Also indexes, have about 1/4 of\n> the size they had on 12.6.\n\nIn pg13, indexes are de-duplicated by default.\n\nBut I suspect the performance is better because data was reload, and the\nsmaller indexes are a small, additional benefit.\n\n> This explain plan is on a SSD local postgres 13.2 instance with default\n> settings and not setting random_page_cost.\n\n> -> Parallel Bitmap Heap Scan on schema.logtable (cost=61.84..16243.96 rows=1845 width=2638) (actual time=0.350..2.419 rows=2362 loops=3)\n> Output: column1, .. ,column54\n> Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002))\n> Filter: (logtable.tfnlogent_archivestatus <= 1)\n> Heap Blocks: exact=2095\n> Buffers: shared hit=2109\n\nIn the pg13 instance, the index *and heap* scans hit only 2109 buffers (16MB).\n\nOn your original instance, it took 300k buffers (2.4GB), mostly uncached and\nread from disk.\n\n> This will still take some time so I will update once I have this tested.\n> Seems like there is something skewed going on with the development database\n> so far.\n\nI still think you should try to cluster, or at least reindex (which cluster\nalso does) and then analyze. The bitmap scan is probably happening because 1)\nyou're reading a large number of tuples; and, 2) the index is \"uncorrelated\",\nso a straight index scan would randomly access 300k disk pages, which is much\nworse even than reading 2400MB to get just 16MB of data.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 7 May 2021 11:15:51 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
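[Annotation] Justin's buffer arithmetic above can be checked directly: `EXPLAIN (ANALYZE, BUFFERS)` reports counts of blocks, and PostgreSQL's default block size is 8 kB. A minimal back-of-envelope sketch (the helper name is ours, not from the thread):

```python
# EXPLAIN (BUFFERS) counts are in blocks; the default block size is 8 kB.
BLOCK_SIZE = 8192

def buffers_to_mib(n_buffers):
    """Convert a buffer count from EXPLAIN (BUFFERS) into MiB."""
    return n_buffers * BLOCK_SIZE / (1024 * 1024)

# pg13 plan: shared hit=2109 -> roughly the "16MB" mentioned above
print(round(buffers_to_mib(2109), 1))            # 16.5
# original plan: ~300k buffers, mostly read from disk -> ~2.3 GiB ("2.4GB")
print(round(buffers_to_mib(300_000) / 1024, 2))  # 2.29
```

This makes the gap concrete: the reloaded pg13 instance touches about 16 MiB in total, while the original plan churns through gigabytes of mostly-uncached heap pages.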
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Fri, May 07, 2021 at 05:57:19PM +0200, Semen Yefimenko wrote:\n>> For testing purposes I set up a separate postgres 13.2 instance on windows.\n>> To my surprise, it works perfectly fine. Also indexes, have about 1/4 of\n>> the size they had on 12.6.\n\n> In pg13, indexes are de-duplicated by default.\n> But I suspect the performance is better because data was reload, and the\n> smaller indexes are a small, additional benefit.\n\nIndex bloat is often a consequence of inadequate vacuuming. You might\nneed to dial up autovacuum's aggressiveness to keep their sizes in check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 May 2021 12:28:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "On Fri, May 7, 2021 at 9:16 AM Justin Pryzby <[email protected]> wrote:\n> In pg13, indexes are de-duplicated by default.\n>\n> But I suspect the performance is better because data was reload, and the\n> smaller indexes are a small, additional benefit.\n\nThat's a very reasonable interpretation, since the bitmap index scans\nthemselves just aren't doing that much I/O -- we see that there is\nmuch more I/O for the heap scan, which is likely to be what the\ngeneral picture looks like no matter how much bloat there is.\n\nHowever, I'm not sure if that reasonable interpretation is actually\ncorrect. The nbtinsert.c code that handles deleting LP_DEAD index\ntuples no longer relies on having a page-level garbage item flag set\nin Postgres 13 -- it just scans the line pointer array for LP_DEAD\nitems each time. VACUUM has a rather unhelpful tendency to unset the\nflag when it shouldn't, which we're no longer affected by. So that's\none possible explanation.\n\nAnother possible explanation is that smaller indexes (due to\ndeduplication) are more likely to get index scans, which leads to\nsetting the LP_DEAD bit of known-dead index tuples in passing more\noften (bitmap index scans won't do the kill_prior_tuple optimization).\nThere could even be a virtuous circle over time. (Note that the index\ndeletion stuff in Postgres 14 pretty much makes sure that this\nhappens, but it is probably at least much more likely in Postgres 13\ncompared to 12.)\n\nI could easily be very wrong about all of this in this instance,\nthough, because the behavior I've described is highly non-linear and\ntherefore highly unpredictable in general (not to mention highly\nsensitive to workload characteristics). I'm sure that I've thought\nabout this stuff way more than any other individual Postgres\ncontributor, but it's easy to be wrong in any given instance. The real\nexplanation might be something else entirely. 
Though it's hard not to\nimagine that what really matters here is avoiding all of that bitmap\nheap scan I/O.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 7 May 2021 14:28:33 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
    "msg_contents": "On Fri, May 7, 2021 at 2:28 PM Peter Geoghegan <[email protected]> wrote:\n> That's a very reasonable interpretation, since the bitmap index scans\n> themselves just aren't doing that much I/O -- we see that there is\n> much more I/O for the heap scan, which is likely to be what the\n> general picture looks like no matter how much bloat there is.\n>\n> However, I'm not sure if that reasonable interpretation is actually\n> correct. The nbtinsert.c code that handles deleting LP_DEAD index\n> tuples no longer relies on having a page-level garbage item flag set\n> in Postgres 13 -- it just scans the line pointer array for LP_DEAD\n> items each time.\n\nBTW, I am pointing all of this out because I've heard informal reports\nof big improvements following an upgrade to Postgres 13 that seem\nunlikely to be related to the simple fact that indexes are smaller\n(most of the time you cannot save that much I/O by shrinking indexes\nwithout affecting when and how TIDs/heap tuples are scanned).\n\nIt's necessary to simulate the production workload to have *any* idea\nif LP_DEAD index tuple deletion might be a factor. If the OP is just\ntesting this one query on Postgres 13 in isolation, without anything\nbloating up (or cleaning up) indexes, then that doesn't really tell us\nanything about how Postgres 13 compares to Postgres 12. As you said,\nsimply shrinking the indexes is nice, but not enough -- we'd need some\nsort of second order effect to get acceptable performance over time\nand under real world conditions.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 7 May 2021 14:43:17 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
    "msg_contents": "Are you sure you're using the same data det ?\n\nUnless I'm overlooking something obvious one result has 500 000 rows the\nother 7 000.\n",
"msg_date": "Sat, 8 May 2021 00:17:02 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
    "msg_contents": ">\n> Unless I'm overlooking something obvious one result has 500 000 rows the\n> other 7 000.\n\nYou are right, it wasn't. I have 2 datasets, one containing 12 mil entries\nand the other 14 mil entries. I accidentally used the one with 12 mil\nentries in that table which actually only contains 7000~ entries for that\nsql query.\nFor now I have tested the 12.6 Postgres with default values and it finished\nin 12 seconds. I'll do some thorough testing and let you know once I\nfinish, sorry for the confusion.\n\nAm Sa., 8. Mai 2021 um 00:17 Uhr schrieb didier <[email protected]>:\n\n> Are you sure you're using the same data det ?\n>\n> Unless I'm overlooking something obvious one result has 500 000 rows the\n> other 7 000.\n",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
},
{
"msg_contents": "I've done some testing on different versions of postgres.\nUnfortunately after the weekend, the problem vanished.\nThe systems are running as usual and the query finishes in 500 MS.\nIt must have been an issue with the VMs or the DISKs.\nEither way, thank you for your support.\n\nHere are btw. some testing results.\n\n----------------------------------------------------Linux PG_13.2 (docker\ninstance)\nGather (cost=1000.00..573832.18 rows=486255 width=2567) (actual\ntime=232.444..23682.816 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=15883 read=390758\n -> Parallel Seq Scan on schema.logtable (cost=0.00..524206.67\nrows=202606 width=2567) (actual time=256.462..23522.715 rows=172172 loops=3)\n Output: column1, .. , column54\n Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype =\n4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))\n Rows Removed by Filter: 4533459\n Buffers: shared hit=15883 read=390758\n Worker 0: actual time=266.613..23529.215 rows=171917 loops=1\n JIT:\n Functions: 2\n Options: Inlining true, Optimization true, Expressions true,\nDeforming true\n Timing: Generation 0.805 ms, Inlining 52.127 ms, Optimization\n150.748 ms, Emission 63.482 ms, Total 267.162 ms\n Buffers: shared hit=5354 read=130007\n Worker 1: actual time=270.921..23527.953 rows=172273 loops=1\n JIT:\n Functions: 2\n Options: Inlining true, Optimization true, Expressions true,\nDeforming true\n Timing: Generation 1.217 ms, Inlining 49.556 ms, Optimization\n154.765 ms, Emission 65.153 ms, Total 270.690 ms\n Buffers: shared hit=5162 read=130108\nPlanning Time: 0.356 ms\nJIT:\n Functions: 6\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 3.578 ms, Inlining 106.136 ms, Optimization 443.580\nms, Emission 217.728 ms, Total 771.021 ms\nExecution Time: 23736.150 ms\n-----------Query Takes 245 MS\n-----------IndexSize: 
average 80 MB\n----------------------------------------------------Windows PG_12.6 (local\ninstance)\nGather (cost=1000.00..575262.60 rows=499935 width=2526) (actual\ntime=2.155..2555.388 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=128 read=406517\n -> Parallel Seq Scan on schema.logtable (cost=0.00..524269.10\nrows=208306 width=2526) (actual time=0.651..2469.220 rows=172172 loops=3)\n Output: column1, .. , column54\n Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype =\n4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))\n Rows Removed by Filter: 4533459\n Buffers: shared hit=128 read=406517\n Worker 0: actual time=0.637..2478.110 rows=172580 loops=1\n Buffers: shared hit=41 read=135683\n Worker 1: actual time=0.084..2474.863 rows=173408 loops=1\n Buffers: shared hit=42 read=135837\nPlanning Time: 0.201 ms\nExecution Time: 2572.065 ms\n-----------Query Takes 18 MS\n-----------IndexSize: average 300 MB\n----------------------------------------------------Windows PG_13.2 (local\ninstance)\nGather (cost=1000.00..575680.37 rows=503383 width=2531) (actual\ntime=1.045..2586.700 rows=516517 loops=1)\n Output: column1, .. , column54\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=8620 read=398025\n -> Parallel Seq Scan on schema.logtable (cost=0.00..524342.07\nrows=209743 width=2531) (actual time=0.346..2485.163 rows=172172 loops=3)\n Output: column1, .. 
, column54\n Filter: ((logtable.archivestatus <= 1) AND ((logtable.entrytype =\n4000) OR (logtable.entrytype = 4001) OR (logtable.entrytype = 4002)))\n Rows Removed by Filter: 4533459\n Buffers: shared hit=8620 read=398025\n Worker 0: actual time=0.155..2487.411 rows=174277 loops=1\n Buffers: shared hit=2954 read=133173\n Worker 1: actual time=0.746..2492.533 rows=173762 loops=1\n Buffers: shared hit=2813 read=133935\nPlanning Time: 0.154 ms\nExecution Time: 2604.983 ms\n-----------Query Takes 18 MS\n-----------IndexSize: average 80 MB\n----------------------------------------------------Windows PG_12.6 (remote\ninstance)\nBitmap Heap Scan on schema.logtable (cost=10326.36..449509.96 rows=530847\nwidth=2540) (actual time=406.235..6770.263 rows=516517 loops=1)\n Output: column1, .. , column54\n Recheck Cond: ((logtable.entrytype = 4000) OR (logtable.entrytype = 4001)\nOR (logtable.entrytype = 4002))\n Filter: (logtable.archivestatus <= 1)\n Heap Blocks: exact=294098\n Buffers: shared hit=3632 read=291886\n -> BitmapOr (cost=10326.36..10326.36 rows=536922 width=0) (actual\ntime=212.117..212.124 rows=0 loops=1)\n Buffers: shared hit=1420\n -> Bitmap Index Scan on idx_entrytype (cost=0.00..1196.37\nrows=64525 width=0) (actual time=30.677..30.678 rows=65970 loops=1)\n Index Cond: (logtable.entrytype = 4000)\n Buffers: shared hit=183\n -> Bitmap Index Scan on idx_entrytype (cost=0.00..4605.07\nrows=249151 width=0) (actual time=110.538..110.539 rows=225283 loops=1)\n Index Cond: (logtable.entrytype = 4001)\n Buffers: shared hit=619\n -> Bitmap Index Scan on idx_entrytype (cost=0.00..4126.79\nrows=223247 width=0) (actual time=70.887..70.888 rows=225264 loops=1)\n Index Cond: (logtable.entrytype = 4002)\n Buffers: shared hit=618\nSettings: temp_buffers = '80MB', work_mem = '1GB'\nPlanning Time: 0.409 ms\nExecution Time: 7259.515 ms\n-----------Query Takes 570 MS\n-----------IndexSize: average 300 MB\n\n\n\n\n\nAm Sa., 8. 
Mai 2021 um 14:06 Uhr schrieb Semen Yefimenko <\[email protected]>:\n\n> Unless I'm overlooking something obvious one result has 500 000 rows the\n>> other 7 000.\n>\n> You are right, it wasn't. I have 2 datasets, one containing 12 mil entries\n> and the other 14 mil entries. I accidentally used the one with 12 mil\n> entries in that table which actually only contains 7000~ entries for that\n> sql query.\n> For now I have tested the 12.6 Postgres with default values and it\n> finished in 12 seconds. I'll do some thorough testing and let you know once\n> I finish, sorry for the confusion.\n>\n> Am Sa., 8. Mai 2021 um 00:17 Uhr schrieb didier <[email protected]>:\n>\n>> Are you sure you're using the same data det ?\n>>\n>> Unless I'm overlooking something obvious one result has 500 000 rows the\n>> other 7 000.\n",
"msg_date": "Mon, 10 May 2021 13:14:40 +0200",
"msg_from": "Semen Yefimenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow Query compared to Oracle / SQL - Server"
}
] |
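[Annotation] A quick sanity check on the timings in this thread, assuming the default 8 kB block size: dividing bytes read by elapsed time separates "slow storage" from "slow executor". The helper below is our own back-of-envelope sketch, not something from the thread.

```python
def read_throughput_mb_s(buffers_read, elapsed_ms, block_size=8192):
    """Approximate read throughput (MB/s, decimal) implied by an
    EXPLAIN (ANALYZE, BUFFERS) run: blocks read * block size / elapsed time."""
    return buffers_read * block_size / 1e6 / (elapsed_ms / 1000.0)

# local Windows 12.6: read=406517 buffers in ~2572 ms -> ~1.3 GB/s (cached / fast SSD)
print(round(read_throughput_mb_s(406517, 2572)))   # 1295
# docker instance: read=390758 buffers in ~23682 ms -> ~135 MB/s
print(round(read_throughput_mb_s(390758, 23682)))  # 135
```

The ~10x difference in implied throughput is consistent with the poster's conclusion that the original slowness was a VM/disk issue rather than a planner problem.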
[
{
"msg_contents": "Hi Team,\n\nI have query in terms of lock monitoring in PostgreSQL where I am not able to find a way to figure out what value has been passed in SQL statement (from JDBC driver as prepared statement).\n\nI am using PostgreSQL 13 version.\n\nThe following is the SQL statement I am running in PGAdmin\n\n\nSELECT\n activity.pid as BlockedPid,\n activity.usename,\n activity.query,\n blocking.pid AS Blocking_Pid,\n to_char(now() - activity.query_start, 'HH24:MI:SS') as elapsed\nFROM pg_stat_activity AS activity\nJOIN pg_stat_activity AS blocking ON blocking.pid = ANY(pg_blocking_pids(activity.pid));\n\n\nThe output I am getting is below, where in the SQL query is with $1.\n\n[cid:[email protected]]\n\nIs there a way to compute what is being passed as value for the above SQL statement ?\n\n\n\nThanks\n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.",
"msg_date": "Thu, 13 May 2021 13:54:32 +0000",
"msg_from": "Manoj Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL blocked locks query"
},
{
"msg_contents": "On Thu, May 13, 2021 at 01:54:32PM +0000, Manoj Kumar wrote:\n> I have query in terms of lock monitoring in PostgreSQL where I am not able to find a way to figure out what value has been passed in SQL statement (from JDBC driver as prepared statement).\n> \n> I am using PostgreSQL 13 version.\n> \n> The following is the SQL statement I am running in PGAdmin\n> \n> The output I am getting is below, where in the SQL query is with $1.\n> \n> [cid:[email protected]]\n> \n> Is there a way to compute what is being passed as value for the above SQL statement ?\n\nYou should enable query logging, and pull the params out of the log.\n\nNote that v13 has log_parameter_max_length, which defaults to showing params in\nfull.\n\n[pryzbyj@telsasoft2019 ~]$ PGOPTIONS='-c log_min_duration_statement=0 -c client_min_messages=debug' python3 -c \"import pg; db=pg.DB('postgres'); q=db.query('SELECT \\$1', 1)\"\nDEBUG: loaded library \"auto_explain\"\nDEBUG: parse <unnamed>: SELECT $1\nLOG: duration: 0.230 ms parse <unnamed>: SELECT $1\nDEBUG: bind <unnamed> to <unnamed>\nLOG: duration: 0.141 ms bind <unnamed>: SELECT $1\nDETAIL: parameters: $1 = '1'\nLOG: duration: 0.029 ms execute <unnamed>: SELECT $1\nDETAIL: parameters: $1 = '1'\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 13 May 2021 15:09:25 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL blocked locks query"
},
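[Annotation] Once `log_min_duration_statement` is capturing bind parameters as shown above, the `DETAIL:  parameters: ...` lines can be scraped mechanically. A rough sketch (the function name and regex are ours, assuming the default stderr log format from Justin's example; values are single-quoted with embedded quotes doubled):

```python
import re

def parse_bind_params(detail_line):
    """Extract $n -> value pairs from a "DETAIL:  parameters: ..." log line."""
    m = re.search(r"parameters:\s*(.*)", detail_line)
    if not m:
        return {}
    # Each parameter looks like: $1 = 'value'; embedded quotes appear as ''.
    return dict(re.findall(r"(\$\d+) = '((?:[^']|'')*)'", m.group(1)))

line = "DETAIL:  parameters: $1 = '1', $2 = 'O''Brien'"
print(parse_bind_params(line))  # {'$1': '1', '$2': "O''Brien"}
```

This only works after the fact, from the server log; the `$1` placeholders visible in `pg_stat_activity.query` are never resolved there.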
{
"msg_contents": "Hi ,\n\nThank you for the reply. Is there a similar way to extract the same from a SQL command ?\n\nThanks\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]>\nSent: Thursday, May 13, 2021 10:09 PM\nTo: Manoj Kumar <[email protected]>\nCc: [email protected]\nSubject: [EXT MSG] Re: PostgreSQL blocked locks query\n\nEXTERNAL source. Be CAREFUL with links / attachments\n\nOn Thu, May 13, 2021 at 01:54:32PM +0000, Manoj Kumar wrote:\n> I have query in terms of lock monitoring in PostgreSQL where I am not able to find a way to figure out what value has been passed in SQL statement (from JDBC driver as prepared statement).\n>\n> I am using PostgreSQL 13 version.\n>\n> The following is the SQL statement I am running in PGAdmin\n>\n> The output I am getting is below, where in the SQL query is with $1.\n>\n> [cid:[email protected]]\n>\n> Is there a way to compute what is being passed as value for the above SQL statement ?\n\nYou should enable query logging, and pull the params out of the log.\n\nNote that v13 has log_parameter_max_length, which defaults to showing params in full.\n\n[pryzbyj@telsasoft2019 ~]$ PGOPTIONS='-c log_min_duration_statement=0 -c client_min_messages=debug' python3 -c \"import pg; db=pg.DB('postgres'); q=db.query('SELECT \\$1', 1)\"\nDEBUG: loaded library \"auto_explain\"\nDEBUG: parse <unnamed>: SELECT $1\nLOG: duration: 0.230 ms parse <unnamed>: SELECT $1\nDEBUG: bind <unnamed> to <unnamed>\nLOG: duration: 0.141 ms bind <unnamed>: SELECT $1\nDETAIL: parameters: $1 = '1'\nLOG: duration: 0.029 ms execute <unnamed>: SELECT $1\nDETAIL: parameters: $1 = '1'\n\n--\nJustin\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. 
Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\n\n\n",
"msg_date": "Fri, 14 May 2021 06:51:36 +0000",
"msg_from": "Manoj Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Re: PostgreSQL blocked locks query"
}
] |
[
{
"msg_contents": "Hi\n\nI am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW) database.\nIt takes around 5 minutes for pgmetrics to run. I traced the problem to the\n\"bloat query\" (version of\nhttps://wiki.postgresql.org/wiki/Show_database_bloat) spinning in CPU,\ndoing no I/O.\n\nI have traced the problem to the bloated `pg_class` (the irony: `pgmetrics`\ndoes not collect bloat on `pg_catalog`):\n`vacuum (full, analyze, verbose) pg_class;`\n```\nINFO: vacuuming \"pg_catalog.pg_class\"\nINFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions in\n158870 pages\nDETAIL: 7429943 dead row versions cannot be removed yet.\nCPU 1.36s/6.40u sec elapsed 9.85 sec.\nINFO: analyzing \"pg_catalog.pg_class\"\nINFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live rows\nand 2806547 dead rows; 295 rows in sample, 781 estimated total rows\nVACUUM\n```\n\n`pg_class` has so many dead rows because the workload is temp-table heavy\n(creating/destroying 1M+ temporary tables per day) and has long running\nanalytics queries running for 24h+.\n\nPG query planner assumes that index scan on `pg_class` will be very quick\nand plans Nested loop with Index scan. 
However, the index scan has 7M dead\ntuples to filter out and the query takes more than 200 seconds (\nhttps://explain.depesz.com/s/bw2G).\n\nIf I create a temp table from `pg_class` to contain only the live tuples:\n```\nCREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\nCREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\nCREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON\npg_class_alive(relname, relnamespace);\nCREATE INDEX pg_class_tblspc_relfilenode_index ON\npg_class_alive(reltablespace, relfilenode);\nANALYZE pg_class_alive;\n```\n\nand run the bloat query on `pg_class_alive` instead of `pg_class`:\n```\nSELECT\n nn.nspname AS schemaname,\n cc.relname AS tablename,\n COALESCE(cc.reltuples,0) AS reltuples,\n COALESCE(cc.relpages,0) AS relpages,\n COALESCE(CEIL((cc.reltuples*((datahdr+8-\n (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8\nEND))+nullhdr2+4))/(8192-20::float)),0) AS otta\n FROM\n pg_class_alive cc\n JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <>\n'information_schema'\n LEFT JOIN\n (\n SELECT\n foo.nspname,foo.relname,\n (datawidth+32)::numeric AS datahdr,\n (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE nullhdr%8\nEND))) AS nullhdr2\n FROM (\n SELECT\n ns.nspname, tbl.relname,\n SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS\ndatawidth,\n MAX(coalesce(null_frac,0)) AS maxfracsum,\n 23+(\n SELECT 1+count(*)/8\n FROM pg_stats s2\n WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename\n= tbl.relname\n ) AS nullhdr\n FROM pg_attribute att\n JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n AND s.tablename = tbl.relname\n AND s.inherited=false\n AND s.attname=att.attname\n WHERE att.attnum > 0 AND tbl.relkind='r'\n GROUP BY 1,2\n ) AS foo\n ) AS rs\n ON cc.relname = rs.relname AND nn.nspname = rs.nspname\n LEFT JOIN pg_index i ON indrelid = cc.oid\n LEFT 
JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n```\n\nit runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH)\n\nThe rabbit hole probably goes deeper (e.g. should do the same for\npg_statistic and pg_attribute and create a new pg_stats view).\n\nI am not able (at least not quickly) change the amount of temporary tables\ncreated or make the analytics queries finish quicker. Apart from the above\nhack of filtering out live tuples to a separate table is there anything I\ncould do?\n\nThank you,\nMarcin Gozdalik\n\n-- \nMarcin Gozdalik",
"msg_date": "Fri, 14 May 2021 11:06:33 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow \"bloat query\""
},
{
"msg_contents": "Le 14/05/2021 à 13:06, Marcin Gozdalik a écrit :\n> Hi\n>\n> I am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW) \n> database. It takes around 5 minutes for pgmetrics to run. I traced the \n> problem to the \"bloat query\" (version of \n> https://wiki.postgresql.org/wiki/Show_database_bloat \n> <https://wiki.postgresql.org/wiki/Show_database_bloat>) spinning in \n> CPU, doing no I/O.\n>\n> I have traced the problem to the bloated `pg_class` (the irony: \n> `pgmetrics` does not collect bloat on `pg_catalog`):\n> `vacuum (full, analyze, verbose) pg_class;`\n> ```\n> INFO: vacuuming \"pg_catalog.pg_class\"\n> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row \n> versions in 158870 pages\n> DETAIL: 7429943 dead row versions cannot be removed yet.\n> CPU 1.36s/6.40u sec elapsed 9.85 sec.\n> INFO: analyzing \"pg_catalog.pg_class\"\n> INFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live \n> rows and 2806547 dead rows; 295 rows in sample, 781 estimated total rows\n> VACUUM\n> ```\n>\n> `pg_class` has so many dead rows because the workload is temp-table \n> heavy (creating/destroying 1M+ temporary tables per day) and has long \n> running analytics queries running for 24h+.\n>\n> PG query planner assumes that index scan on `pg_class` will be very \n> quick and plans Nested loop with Index scan. 
However, the index scan \n> has 7M dead tuples to filter out and the query takes more than 200 \n> seconds (https://explain.depesz.com/s/bw2G \n> <https://explain.depesz.com/s/bw2G>).\n>\n> If I create a temp table from `pg_class` to contain only the live tuples:\n> ```\n> CREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\n> CREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\n> CREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON \n> pg_class_alive(relname, relnamespace);\n> CREATE INDEX pg_class_tblspc_relfilenode_index ON \n> pg_class_alive(reltablespace, relfilenode);\n> ANALYZE pg_class_alive;\n> ```\n>\n> and run the bloat query on `pg_class_alive` instead of `pg_class`:\n> ```\n> SELECT\n> nn.nspname AS schemaname,\n> cc.relname AS tablename,\n> COALESCE(cc.reltuples,0) AS reltuples,\n> COALESCE(cc.relpages,0) AS relpages,\n> COALESCE(CEIL((cc.reltuples*((datahdr+8-\n> (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8 \n> END))+nullhdr2+4))/(8192-20::float)),0) AS otta\n> FROM\n> pg_class_alive cc\n> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> \n> 'information_schema'\n> LEFT JOIN\n> (\n> SELECT\n> foo.nspname,foo.relname,\n> (datawidth+32)::numeric AS datahdr,\n> (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE \n> nullhdr%8 END))) AS nullhdr2\n> FROM (\n> SELECT\n> ns.nspname, tbl.relname,\n> SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS \n> datawidth,\n> MAX(coalesce(null_frac,0)) AS maxfracsum,\n> 23+(\n> SELECT 1+count(*)/8\n> FROM pg_stats s2\n> WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND \n> s2.tablename = tbl.relname\n> ) AS nullhdr\n> FROM pg_attribute att\n> JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n> JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n> LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n> AND s.tablename = tbl.relname\n> AND s.inherited=false\n> AND s.attname=att.attname\n> WHERE att.attnum > 0 AND tbl.relkind='r'\n> GROUP BY 1,2\n> ) 
AS foo\n> ) AS rs\n> ON cc.relname = rs.relname AND nn.nspname = rs.nspname\n> LEFT JOIN pg_index i ON indrelid = cc.oid\n> LEFT JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n> ```\n>\n> it runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH \n> <https://explain.depesz.com/s/K4SH>)\n>\n> The rabbit hole probably goes deeper (e.g. should do the same for \n> pg_statistic and pg_attribute and create a new pg_stats view).\n>\n> I am not able (at least not quickly) change the amount of temporary \n> tables created or make the analytics queries finish quicker. Apart \n> from the above hack of filtering out live tuples to a separate table \n> is there anything I could do?\n\n\nHi,\n\n\nTo avoid bloating your catalog with temporary tables you can try using \nhttps://github.com/darold/pgtt-rsl I don't know if it will fit the \nperformances but at least you will not bloat the catalog anymore.\n\n\nAbout your hack, I don't see other solution except running vacuum on the \ncatalog tables more often, but I guess that this is already done or not \npossible. But not bloating the catalog at such level is the right solution.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n\n",
"msg_date": "Fri, 14 May 2021 13:50:54 +0200",
"msg_from": "Gilles Darold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "> Apart from the above hack of filtering out live tuples to a separate\ntable is there anything I could do?\n\nThis is the latest PG13.3 version?\n\nIMHO: If not, maybe worth updating to the latest patch release, as soon\nas possible\n\nhttps://www.postgresql.org/docs/release/13.3/\nRelease date: 2021-05-13\n*\"Disable the vacuum_cleanup_index_scale_factor parameter and storage\noption (Peter Geoghegan)*\n*The notion of tracking “stale” index statistics proved to interact badly\nwith the autovacuum_vacuum_insert_threshold parameter, resulting in\nunnecessary full-index scans and consequent degradation of autovacuum\nperformance. The latter mechanism seems superior, so remove the\nstale-statistics logic. The control parameter for that,\nvacuum_cleanup_index_scale_factor, will be removed entirely in v14. In v13,\nit remains present to avoid breaking existing configuration files, but it\nno longer does anything.\"*\n\nbest,\n Imre\n\n\nMarcin Gozdalik <[email protected]> ezt írta (időpont: 2021. máj. 14., P,\n13:20):\n\n> Hi\n>\n> I am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW) database.\n> It takes around 5 minutes for pgmetrics to run. 
I traced the problem to the\n> \"bloat query\" (version of\n> https://wiki.postgresql.org/wiki/Show_database_bloat) spinning in CPU,\n> doing no I/O.\n>\n> I have traced the problem to the bloated `pg_class` (the irony:\n> `pgmetrics` does not collect bloat on `pg_catalog`):\n> `vacuum (full, analyze, verbose) pg_class;`\n> ```\n> INFO: vacuuming \"pg_catalog.pg_class\"\n> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions in\n> 158870 pages\n> DETAIL: 7429943 dead row versions cannot be removed yet.\n> CPU 1.36s/6.40u sec elapsed 9.85 sec.\n> INFO: analyzing \"pg_catalog.pg_class\"\n> INFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live rows\n> and 2806547 dead rows; 295 rows in sample, 781 estimated total rows\n> VACUUM\n> ```\n>\n> `pg_class` has so many dead rows because the workload is temp-table heavy\n> (creating/destroying 1M+ temporary tables per day) and has long running\n> analytics queries running for 24h+.\n>\n> PG query planner assumes that index scan on `pg_class` will be very quick\n> and plans Nested loop with Index scan. 
However, the index scan has 7M dead\n> tuples to filter out and the query takes more than 200 seconds (\n> https://explain.depesz.com/s/bw2G).\n>\n> If I create a temp table from `pg_class` to contain only the live tuples:\n> ```\n> CREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\n> CREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\n> CREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON\n> pg_class_alive(relname, relnamespace);\n> CREATE INDEX pg_class_tblspc_relfilenode_index ON\n> pg_class_alive(reltablespace, relfilenode);\n> ANALYZE pg_class_alive;\n> ```\n>\n> and run the bloat query on `pg_class_alive` instead of `pg_class`:\n> ```\n> SELECT\n> nn.nspname AS schemaname,\n> cc.relname AS tablename,\n> COALESCE(cc.reltuples,0) AS reltuples,\n> COALESCE(cc.relpages,0) AS relpages,\n> COALESCE(CEIL((cc.reltuples*((datahdr+8-\n> (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8\n> END))+nullhdr2+4))/(8192-20::float)),0) AS otta\n> FROM\n> pg_class_alive cc\n> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <>\n> 'information_schema'\n> LEFT JOIN\n> (\n> SELECT\n> foo.nspname,foo.relname,\n> (datawidth+32)::numeric AS datahdr,\n> (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE nullhdr%8\n> END))) AS nullhdr2\n> FROM (\n> SELECT\n> ns.nspname, tbl.relname,\n> SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS\n> datawidth,\n> MAX(coalesce(null_frac,0)) AS maxfracsum,\n> 23+(\n> SELECT 1+count(*)/8\n> FROM pg_stats s2\n> WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND\n> s2.tablename = tbl.relname\n> ) AS nullhdr\n> FROM pg_attribute att\n> JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n> JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n> LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n> AND s.tablename = tbl.relname\n> AND s.inherited=false\n> AND s.attname=att.attname\n> WHERE att.attnum > 0 AND tbl.relkind='r'\n> GROUP BY 1,2\n> ) AS foo\n> ) AS rs\n> ON cc.relname = rs.relname 
AND nn.nspname = rs.nspname\n> LEFT JOIN pg_index i ON indrelid = cc.oid\n> LEFT JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n> ```\n>\n> it runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH)\n>\n> The rabbit hole probably goes deeper (e.g. should do the same for\n> pg_statistic and pg_attribute and create a new pg_stats view).\n>\n> I am not able (at least not quickly) change the amount of temporary tables\n> created or make the analytics queries finish quicker. Apart from the above\n> hack of filtering out live tuples to a separate table is there anything I\n> could do?\n>\n> Thank you,\n> Marcin Gozdalik\n>\n",
"msg_date": "Fri, 14 May 2021 14:07:49 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "Unfortunately it's still 9.6. Upgrade to latest 13 is planned for this year.\n\npt., 14 maj 2021 o 12:08 Imre Samu <[email protected]> napisał(a):\n\n> > Apart from the above hack of filtering out live tuples to a separate\n> table is there anything I could do?\n>\n> This is the latest PG13.3 version?\n>\n> IMHO: If not, maybe worth updating to the latest patch release, as soon\n> as possible\n>\n> https://www.postgresql.org/docs/release/13.3/\n> Release date: 2021-05-13\n> *\"Disable the vacuum_cleanup_index_scale_factor parameter and storage\n> option (Peter Geoghegan)*\n> *The notion of tracking “stale” index statistics proved to interact badly\n> with the autovacuum_vacuum_insert_threshold parameter, resulting in\n> unnecessary full-index scans and consequent degradation of autovacuum\n> performance. The latter mechanism seems superior, so remove the\n> stale-statistics logic. The control parameter for that,\n> vacuum_cleanup_index_scale_factor, will be removed entirely in v14. In v13,\n> it remains present to avoid breaking existing configuration files, but it\n> no longer does anything.\"*\n>\n> best,\n> Imre\n>\n>\n> Marcin Gozdalik <[email protected]> ezt írta (időpont: 2021. máj. 14., P,\n> 13:20):\n>\n>> Hi\n>>\n>> I am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW)\n>> database. It takes around 5 minutes for pgmetrics to run. 
I traced the\n>> problem to the \"bloat query\" (version of\n>> https://wiki.postgresql.org/wiki/Show_database_bloat) spinning in CPU,\n>> doing no I/O.\n>>\n>> I have traced the problem to the bloated `pg_class` (the irony:\n>> `pgmetrics` does not collect bloat on `pg_catalog`):\n>> `vacuum (full, analyze, verbose) pg_class;`\n>> ```\n>> INFO: vacuuming \"pg_catalog.pg_class\"\n>> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions\n>> in 158870 pages\n>> DETAIL: 7429943 dead row versions cannot be removed yet.\n>> CPU 1.36s/6.40u sec elapsed 9.85 sec.\n>> INFO: analyzing \"pg_catalog.pg_class\"\n>> INFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live\n>> rows and 2806547 dead rows; 295 rows in sample, 781 estimated total rows\n>> VACUUM\n>> ```\n>>\n>> `pg_class` has so many dead rows because the workload is temp-table heavy\n>> (creating/destroying 1M+ temporary tables per day) and has long running\n>> analytics queries running for 24h+.\n>>\n>> PG query planner assumes that index scan on `pg_class` will be very quick\n>> and plans Nested loop with Index scan. 
However, the index scan has 7M dead\n>> tuples to filter out and the query takes more than 200 seconds (\n>> https://explain.depesz.com/s/bw2G).\n>>\n>> If I create a temp table from `pg_class` to contain only the live tuples:\n>> ```\n>> CREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\n>> CREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\n>> CREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON\n>> pg_class_alive(relname, relnamespace);\n>> CREATE INDEX pg_class_tblspc_relfilenode_index ON\n>> pg_class_alive(reltablespace, relfilenode);\n>> ANALYZE pg_class_alive;\n>> ```\n>>\n>> and run the bloat query on `pg_class_alive` instead of `pg_class`:\n>> ```\n>> SELECT\n>> nn.nspname AS schemaname,\n>> cc.relname AS tablename,\n>> COALESCE(cc.reltuples,0) AS reltuples,\n>> COALESCE(cc.relpages,0) AS relpages,\n>> COALESCE(CEIL((cc.reltuples*((datahdr+8-\n>> (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8\n>> END))+nullhdr2+4))/(8192-20::float)),0) AS otta\n>> FROM\n>> pg_class_alive cc\n>> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <>\n>> 'information_schema'\n>> LEFT JOIN\n>> (\n>> SELECT\n>> foo.nspname,foo.relname,\n>> (datawidth+32)::numeric AS datahdr,\n>> (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE nullhdr%8\n>> END))) AS nullhdr2\n>> FROM (\n>> SELECT\n>> ns.nspname, tbl.relname,\n>> SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS\n>> datawidth,\n>> MAX(coalesce(null_frac,0)) AS maxfracsum,\n>> 23+(\n>> SELECT 1+count(*)/8\n>> FROM pg_stats s2\n>> WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND\n>> s2.tablename = tbl.relname\n>> ) AS nullhdr\n>> FROM pg_attribute att\n>> JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n>> JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n>> LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n>> AND s.tablename = tbl.relname\n>> AND s.inherited=false\n>> AND s.attname=att.attname\n>> WHERE att.attnum > 0 AND tbl.relkind='r'\n>> GROUP BY 
1,2\n>> ) AS foo\n>> ) AS rs\n>> ON cc.relname = rs.relname AND nn.nspname = rs.nspname\n>> LEFT JOIN pg_index i ON indrelid = cc.oid\n>> LEFT JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n>> ```\n>>\n>> it runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH)\n>>\n>> The rabbit hole probably goes deeper (e.g. should do the same for\n>> pg_statistic and pg_attribute and create a new pg_stats view).\n>>\n>> I am not able (at least not quickly) change the amount of temporary\n>> tables created or make the analytics queries finish quicker. Apart from the\n>> above hack of filtering out live tuples to a separate table is there\n>> anything I could do?\n>>\n>> Thank you,\n>> Marcin Gozdalik\n>>\n>\n\n-- \nMarcin Gozdalik",
"msg_date": "Fri, 14 May 2021 12:11:02 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "Marcin Gozdalik <[email protected]> writes:\n> I have traced the problem to the bloated `pg_class` (the irony: `pgmetrics`\n> does not collect bloat on `pg_catalog`):\n> `vacuum (full, analyze, verbose) pg_class;`\n> ```\n> INFO: vacuuming \"pg_catalog.pg_class\"\n> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions in\n> 158870 pages\n> DETAIL: 7429943 dead row versions cannot be removed yet.\n\nUgh. It's understandable that having a lot of temp-table traffic\nwould result in the creation of lots of dead rows in pg_class.\nThe question to be asking is why aren't they vacuumable? You\nmust have a longstanding open transaction somewhere (perhaps\na forgotten prepared transaction?) that is holding back the\nglobal xmin horizon. Closing that out and then doing another\nmanual VACUUM FULL should help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 May 2021 11:04:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "There is a long running analytics query (which is running usually for 30-40\nhours). I agree that's not the best position to be in but right now can't\ndo anything about it.\n\npt., 14 maj 2021 o 15:04 Tom Lane <[email protected]> napisał(a):\n\n> Marcin Gozdalik <[email protected]> writes:\n> > I have traced the problem to the bloated `pg_class` (the irony:\n> `pgmetrics`\n> > does not collect bloat on `pg_catalog`):\n> > `vacuum (full, analyze, verbose) pg_class;`\n> > ```\n> > INFO: vacuuming \"pg_catalog.pg_class\"\n> > INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions\n> in\n> > 158870 pages\n> > DETAIL: 7429943 dead row versions cannot be removed yet.\n>\n> Ugh. It's understandable that having a lot of temp-table traffic\n> would result in the creation of lots of dead rows in pg_class.\n> The question to be asking is why aren't they vacuumable? You\n> must have a longstanding open transaction somewhere (perhaps\n> a forgotten prepared transaction?) that is holding back the\n> global xmin horizon. Closing that out and then doing another\n> manual VACUUM FULL should help.\n>\n> regards, tom lane\n>\n\n\n-- \nMarcin Gozdalik",
"msg_date": "Fri, 14 May 2021 15:15:03 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "> Unfortunately it's still 9.6.\n\nAnd what is your \"*version()*\"?\n\n\nfor example:\npostgres=# select version();\n version\n\n---------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.6.22 on x86_64-pc-linux-gnu (Debian 9.6.22-1.pgdg110+1),\ncompiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\nImre\n\n\nMarcin Gozdalik <[email protected]> ezt írta (időpont: 2021. máj. 14., P,\n14:11):\n\n> Unfortunately it's still 9.6. Upgrade to latest 13 is planned for this\n> year.\n>\n> pt., 14 maj 2021 o 12:08 Imre Samu <[email protected]> napisał(a):\n>\n>> > Apart from the above hack of filtering out live tuples to a separate\n>> table is there anything I could do?\n>>\n>> This is the latest PG13.3 version?\n>>\n>> IMHO: If not, maybe worth updating to the latest patch release, as soon\n>> as possible\n>>\n>> https://www.postgresql.org/docs/release/13.3/\n>> Release date: 2021-05-13\n>> *\"Disable the vacuum_cleanup_index_scale_factor parameter and storage\n>> option (Peter Geoghegan)*\n>> *The notion of tracking “stale” index statistics proved to interact badly\n>> with the autovacuum_vacuum_insert_threshold parameter, resulting in\n>> unnecessary full-index scans and consequent degradation of autovacuum\n>> performance. The latter mechanism seems superior, so remove the\n>> stale-statistics logic. The control parameter for that,\n>> vacuum_cleanup_index_scale_factor, will be removed entirely in v14. In v13,\n>> it remains present to avoid breaking existing configuration files, but it\n>> no longer does anything.\"*\n>>\n>> best,\n>> Imre\n>>\n>>\n>> Marcin Gozdalik <[email protected]> ezt írta (időpont: 2021. máj. 14., P,\n>> 13:20):\n>>\n>>> Hi\n>>>\n>>> I am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW)\n>>> database. It takes around 5 minutes for pgmetrics to run. 
I traced the\n>>> problem to the \"bloat query\" (version of\n>>> https://wiki.postgresql.org/wiki/Show_database_bloat) spinning in CPU,\n>>> doing no I/O.\n>>>\n>>> I have traced the problem to the bloated `pg_class` (the irony:\n>>> `pgmetrics` does not collect bloat on `pg_catalog`):\n>>> `vacuum (full, analyze, verbose) pg_class;`\n>>> ```\n>>> INFO: vacuuming \"pg_catalog.pg_class\"\n>>> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions\n>>> in 158870 pages\n>>> DETAIL: 7429943 dead row versions cannot be removed yet.\n>>> CPU 1.36s/6.40u sec elapsed 9.85 sec.\n>>> INFO: analyzing \"pg_catalog.pg_class\"\n>>> INFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live\n>>> rows and 2806547 dead rows; 295 rows in sample, 781 estimated total rows\n>>> VACUUM\n>>> ```\n>>>\n>>> `pg_class` has so many dead rows because the workload is temp-table\n>>> heavy (creating/destroying 1M+ temporary tables per day) and has long\n>>> running analytics queries running for 24h+.\n>>>\n>>> PG query planner assumes that index scan on `pg_class` will be very\n>>> quick and plans Nested loop with Index scan. 
However, the index scan has 7M\n>>> dead tuples to filter out and the query takes more than 200 seconds (\n>>> https://explain.depesz.com/s/bw2G).\n>>>\n>>> If I create a temp table from `pg_class` to contain only the live tuples:\n>>> ```\n>>> CREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\n>>> CREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\n>>> CREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON\n>>> pg_class_alive(relname, relnamespace);\n>>> CREATE INDEX pg_class_tblspc_relfilenode_index ON\n>>> pg_class_alive(reltablespace, relfilenode);\n>>> ANALYZE pg_class_alive;\n>>> ```\n>>>\n>>> and run the bloat query on `pg_class_alive` instead of `pg_class`:\n>>> ```\n>>> SELECT\n>>> nn.nspname AS schemaname,\n>>> cc.relname AS tablename,\n>>> COALESCE(cc.reltuples,0) AS reltuples,\n>>> COALESCE(cc.relpages,0) AS relpages,\n>>> COALESCE(CEIL((cc.reltuples*((datahdr+8-\n>>> (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8\n>>> END))+nullhdr2+4))/(8192-20::float)),0) AS otta\n>>> FROM\n>>> pg_class_alive cc\n>>> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <>\n>>> 'information_schema'\n>>> LEFT JOIN\n>>> (\n>>> SELECT\n>>> foo.nspname,foo.relname,\n>>> (datawidth+32)::numeric AS datahdr,\n>>> (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE nullhdr%8\n>>> END))) AS nullhdr2\n>>> FROM (\n>>> SELECT\n>>> ns.nspname, tbl.relname,\n>>> SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS\n>>> datawidth,\n>>> MAX(coalesce(null_frac,0)) AS maxfracsum,\n>>> 23+(\n>>> SELECT 1+count(*)/8\n>>> FROM pg_stats s2\n>>> WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND\n>>> s2.tablename = tbl.relname\n>>> ) AS nullhdr\n>>> FROM pg_attribute att\n>>> JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n>>> JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n>>> LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n>>> AND s.tablename = tbl.relname\n>>> AND s.inherited=false\n>>> AND s.attname=att.attname\n>>> 
WHERE att.attnum > 0 AND tbl.relkind='r'\n>>> GROUP BY 1,2\n>>> ) AS foo\n>>> ) AS rs\n>>> ON cc.relname = rs.relname AND nn.nspname = rs.nspname\n>>> LEFT JOIN pg_index i ON indrelid = cc.oid\n>>> LEFT JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n>>> ```\n>>>\n>>> it runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH)\n>>>\n>>> The rabbit hole probably goes deeper (e.g. should do the same for\n>>> pg_statistic and pg_attribute and create a new pg_stats view).\n>>>\n>>> I am not able (at least not quickly) change the amount of temporary\n>>> tables created or make the analytics queries finish quicker. Apart from the\n>>> above hack of filtering out live tuples to a separate table is there\n>>> anything I could do?\n>>>\n>>> Thank you,\n>>> Marcin Gozdalik\n>>>\n>>> --\n>>> Marcin Gozdalik\n>>>\n>\n> --\n> Marcin Gozdalik\n>",
"msg_date": "Fri, 14 May 2021 17:44:55 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow \"bloat query\""
},
{
"msg_contents": "PostgreSQL 9.6.21 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-44), 64-bit\n\npt., 14 maj 2021 o 15:45 Imre Samu <[email protected]> napisał(a):\n\n> > Unfortunately it's still 9.6.\n>\n> And what is your \"*version()*\"?\n>\n>\n> for example:\n> postgres=# select version();\n> version\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.6.22 on x86_64-pc-linux-gnu (Debian 9.6.22-1.pgdg110+1),\n> compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n> (1 row)\n>\n> Imre\n>\n>\n> Marcin Gozdalik <[email protected]> ezt írta (időpont: 2021. máj. 14., P,\n> 14:11):\n>\n>> Unfortunately it's still 9.6. Upgrade to latest 13 is planned for this\n>> year.\n>>\n>> pt., 14 maj 2021 o 12:08 Imre Samu <[email protected]> napisał(a):\n>>\n>>> > Apart from the above hack of filtering out live tuples to a separate\n>>> table is there anything I could do?\n>>>\n>>> This is the latest PG13.3 version?\n>>>\n>>> IMHO: If not, maybe worth updating to the latest patch release, as\n>>> soon as possible\n>>>\n>>> https://www.postgresql.org/docs/release/13.3/\n>>> Release date: 2021-05-13\n>>> *\"Disable the vacuum_cleanup_index_scale_factor parameter and storage\n>>> option (Peter Geoghegan)*\n>>> *The notion of tracking “stale” index statistics proved to interact\n>>> badly with the autovacuum_vacuum_insert_threshold parameter, resulting in\n>>> unnecessary full-index scans and consequent degradation of autovacuum\n>>> performance. The latter mechanism seems superior, so remove the\n>>> stale-statistics logic. The control parameter for that,\n>>> vacuum_cleanup_index_scale_factor, will be removed entirely in v14. In v13,\n>>> it remains present to avoid breaking existing configuration files, but it\n>>> no longer does anything.\"*\n>>>\n>>> best,\n>>> Imre\n>>>\n>>>\n>>> Marcin Gozdalik <[email protected]> ezt írta (időpont: 2021. 
máj. 14.,\n>>> P, 13:20):\n>>>\n>>>> Hi\n>>>>\n>>>> I am trying to use `pgmetrics` on a big (10TB+), busy (1GB/s RW)\n>>>> database. It takes around 5 minutes for pgmetrics to run. I traced the\n>>>> problem to the \"bloat query\" (version of\n>>>> https://wiki.postgresql.org/wiki/Show_database_bloat) spinning in CPU,\n>>>> doing no I/O.\n>>>>\n>>>> I have traced the problem to the bloated `pg_class` (the irony:\n>>>> `pgmetrics` does not collect bloat on `pg_catalog`):\n>>>> `vacuum (full, analyze, verbose) pg_class;`\n>>>> ```\n>>>> INFO: vacuuming \"pg_catalog.pg_class\"\n>>>> INFO: \"pg_class\": found 1 removable, 7430805 nonremovable row versions\n>>>> in 158870 pages\n>>>> DETAIL: 7429943 dead row versions cannot be removed yet.\n>>>> CPU 1.36s/6.40u sec elapsed 9.85 sec.\n>>>> INFO: analyzing \"pg_catalog.pg_class\"\n>>>> INFO: \"pg_class\": scanned 60000 of 158869 pages, containing 295 live\n>>>> rows and 2806547 dead rows; 295 rows in sample, 781 estimated total rows\n>>>> VACUUM\n>>>> ```\n>>>>\n>>>> `pg_class` has so many dead rows because the workload is temp-table\n>>>> heavy (creating/destroying 1M+ temporary tables per day) and has long\n>>>> running analytics queries running for 24h+.\n>>>>\n>>>> PG query planner assumes that index scan on `pg_class` will be very\n>>>> quick and plans Nested loop with Index scan. 
However, the index scan has 7M\n>>>> dead tuples to filter out and the query takes more than 200 seconds (\n>>>> https://explain.depesz.com/s/bw2G).\n>>>>\n>>>> If I create a temp table from `pg_class` to contain only the live\n>>>> tuples:\n>>>> ```\n>>>> CREATE TEMPORARY TABLE pg_class_alive AS SELECT oid,* from pg_class;\n>>>> CREATE UNIQUE INDEX pg_class_alive_oid_index ON pg_class_alive(oid);\n>>>> CREATE UNIQUE INDEX pg_class_alive_relname_nsp_index ON\n>>>> pg_class_alive(relname, relnamespace);\n>>>> CREATE INDEX pg_class_tblspc_relfilenode_index ON\n>>>> pg_class_alive(reltablespace, relfilenode);\n>>>> ANALYZE pg_class_alive;\n>>>> ```\n>>>>\n>>>> and run the bloat query on `pg_class_alive` instead of `pg_class`:\n>>>> ```\n>>>> SELECT\n>>>> nn.nspname AS schemaname,\n>>>> cc.relname AS tablename,\n>>>> COALESCE(cc.reltuples,0) AS reltuples,\n>>>> COALESCE(cc.relpages,0) AS relpages,\n>>>> COALESCE(CEIL((cc.reltuples*((datahdr+8-\n>>>> (CASE WHEN datahdr%8=0 THEN 8 ELSE datahdr%8\n>>>> END))+nullhdr2+4))/(8192-20::float)),0) AS otta\n>>>> FROM\n>>>> pg_class_alive cc\n>>>> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <>\n>>>> 'information_schema'\n>>>> LEFT JOIN\n>>>> (\n>>>> SELECT\n>>>> foo.nspname,foo.relname,\n>>>> (datawidth+32)::numeric AS datahdr,\n>>>> (maxfracsum*(nullhdr+8-(case when nullhdr%8=0 THEN 8 ELSE\n>>>> nullhdr%8 END))) AS nullhdr2\n>>>> FROM (\n>>>> SELECT\n>>>> ns.nspname, tbl.relname,\n>>>> SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS\n>>>> datawidth,\n>>>> MAX(coalesce(null_frac,0)) AS maxfracsum,\n>>>> 23+(\n>>>> SELECT 1+count(*)/8\n>>>> FROM pg_stats s2\n>>>> WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND\n>>>> s2.tablename = tbl.relname\n>>>> ) AS nullhdr\n>>>> FROM pg_attribute att\n>>>> JOIN pg_class_alive tbl ON att.attrelid = tbl.oid\n>>>> JOIN pg_namespace ns ON ns.oid = tbl.relnamespace\n>>>> LEFT JOIN pg_stats s ON s.schemaname=ns.nspname\n>>>> AND s.tablename = tbl.relname\n>>>> 
AND s.inherited=false\n>>>> AND s.attname=att.attname\n>>>> WHERE att.attnum > 0 AND tbl.relkind='r'\n>>>> GROUP BY 1,2\n>>>> ) AS foo\n>>>> ) AS rs\n>>>> ON cc.relname = rs.relname AND nn.nspname = rs.nspname\n>>>> LEFT JOIN pg_index i ON indrelid = cc.oid\n>>>> LEFT JOIN pg_class_alive c2 ON c2.oid = i.indexrelid\n>>>> ```\n>>>>\n>>>> it runs in 10s, 20x faster (https://explain.depesz.com/s/K4SH)\n>>>>\n>>>> The rabbit hole probably goes deeper (e.g. should do the same for\n>>>> pg_statistic and pg_attribute and create a new pg_stats view).\n>>>>\n>>>> I am not able (at least not quickly) change the amount of temporary\n>>>> tables created or make the analytics queries finish quicker. Apart from the\n>>>> above hack of filtering out live tuples to a separate table is there\n>>>> anything I could do?\n>>>>\n>>>> Thank you,\n>>>> Marcin Gozdalik\n>>>>\n>>>> --\n>>>> Marcin Gozdalik\n>>>>\n>>>\n>>\n>> --\n>> Marcin Gozdalik\n>>\n>\n\n-- \nMarcin Gozdalik",
"msg_date": "Fri, 14 May 2021 15:47:20 +0000",
"msg_from": "Marcin Gozdalik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow \"bloat query\""
}
] |
[
{
"msg_contents": "Good day\n\n \n\nI'm struggling with a Postgres 13 performance issue and nothing I do seem to\nhelp.\n\n \n\nI have two tables with one having a foreign key to the other. It happens to\nbe for this one client the foreign key is always null, so no violation would\nbe possible. When deleting from the one table the foreign key trigger for\nconstraint takes 20 seconds to run.\n\n \n\nThe tables look as follows\n\n \n\ntable1\n\n id bigint pkey\n\n value number\n\n \n\ntable2 (55 mil entries)\n\n id bigint pkey\n\n table1_id bigint (fkey to table1 id)\n\n value number\n\n \n\nRunning delete from table1 where id = 48938 the trigger for constraint runs\nfor 20 seconds\n\n \n\nEvent when doing a simple select from table2 where table1_id = 48938 takes\nabout 8 seconds\n\n \n\nI've tried the following, but nothing seems to change the outcome:\n\nCREATE INDEX table2_idx ON table2(table1_id);\n\nCREATE INDEX table2_idx2 ON table2(table1_id) WHERE table1_id IS NOT NULL;\n\nCREATE INDEX table2_idx3 ON table2(table1_id) INCLUDE (id) WHERE table1_id\nIS NOT NULL;\n\n \n\nalter table table2 alter column table1_id set statistics 10000;\n\n \n\nNone of these steps changes the planner and it would continue to do table\nscans.\n\n \n\nRegards\n\nRiaan\n\n\nGood day I’m struggling with a Postgres 13 performance issue and nothing I do seem to help. I have two tables with one having a foreign key to the other. It happens to be for this one client the foreign key is always null, so no violation would be possible. When deleting from the one table the foreign key trigger for constraint takes 20 seconds to run. 
The tables look as follows table1 id bigint pkey value number table2 (55 mil entries) id bigint pkey table1_id bigint (fkey to table1 id) value number Running delete from table1 where id = 48938 the trigger for constraint runs for 20 seconds Event when doing a simple select from table2 where table1_id = 48938 takes about 8 seconds I’ve tried the following, but nothing seems to change the outcome:CREATE INDEX table2_idx ON table2(table1_id);CREATE INDEX table2_idx2 ON table2(table1_id) WHERE table1_id IS NOT NULL;CREATE INDEX table2_idx3 ON table2(table1_id) INCLUDE (id) WHERE table1_id IS NOT NULL; alter table table2 alter column table1_id set statistics 10000; None of these steps changes the planner and it would continue to do table scans. RegardsRiaan",
"msg_date": "Mon, 17 May 2021 22:42:25 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Index and statistics not used"
},
{
"msg_contents": "On Tue, 18 May 2021 at 08:42, <[email protected]> wrote:\n> Running delete from table1 where id = 48938 the trigger for constraint runs for 20 seconds\n>\n> Event when doing a simple select from table2 where table1_id = 48938 takes about 8 seconds\n\nDoes EXPLAIN show it uses a seq scan for this 8-second SELECT?\n\nIf so, does it use the index if you SET enable_seqscan TO off; ? If\nso, how do the costs compare to the seqscan costs?\n\nIs random_page_cost set to something sane? Are all the indexes valid?\n(psql's \\d table2 would show you INVALID if they're not.)\n\ndoes: SHOW enable_indexscan; show that index scanning is switched on?\n\nDavid\n\n\n",
"msg_date": "Tue, 18 May 2021 10:31:50 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index and statistics not used"
}
] |
[
{
"msg_contents": "Hi,\nI am trying to create partitions on the table based on first letter of the column record value using inherit relation & check constraint.\nSomehow able to create and load the data into the tables as per my requirement.\nBut the problem is when querying the data on that partitioned column, it's referring to all the children's tables instead of the matching table.\n\ncreate table t1(id int,name text);\n CREATE TABLE partition_tab.t1_name_null( CONSTRAINT null_check CHECK (name IS NULL)) INHERITS (t1); CREATE or replace FUNCTION partition_tab.func_t1_insert_trigger() RETURNS trigger LANGUAGE 'plpgsql' COST 100 VOLATILE NOT LEAKPROOFAS $BODY$DECLARE chk_cond text; c_table TEXT; c_table1 text; new_name text; m_table1 text; BEGIN if ( NEW.name is null) THEN INSERT into partition_tab.t1_name_null VALUES (NEW.*); elseif ( NEW.name is not null) THEN new_name:= substr(NEW.name,1,1); raise info 'new_name %',new_name; c_table := TG_TABLE_NAME || '_' || new_name; c_table1 := 'partition_tab.' || c_table; m_table1 := ''||TG_TABLE_NAME; IF NOT EXISTS(SELECT relname FROM pg_class WHERE relname=lower(c_table)) THEN RAISE NOTICE 'values out of range partition, creating partition table: partition_tab.%',c_table;\n chk_cond := new_name||'%'; raise info 'chk_cond %',chk_cond;\n EXECUTE 'CREATE TABLE partition_tab.' 
|| c_table || '(check ( name like '''|| chk_cond||''')) INHERITS (' ||TG_TABLE_NAME|| ');';\n\n END IF; EXECUTE 'INSERT INTO ' || c_table1 || ' SELECT(' || m_table1 || ' ' || quote_literal(NEW) || ').* RETURNING id;'; END IF; RETURN NULL; END;$BODY$;\nCREATE TRIGGER t1_trigger BEFORE INSERT OR UPDATE ON t1 FOR EACH ROW EXECUTE PROCEDURE partition_tab.func_t1_insert_trigger()\n\nexamples: Postgres 11 | db<>fiddle\n\nAny suggestions.\n\nThanks,Rj",
"msg_date": "Fri, 21 May 2021 00:32:11 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition with check constraint with \"like\""
},
{
"msg_contents": "On Fri, 21 May 2021 at 12:32, Nagaraj Raj <[email protected]> wrote:\n> I am trying to create partitions on the table based on first letter of the column record value using inherit relation & check constraint.\n\nYou'll get much better performance out of native partitioning than you\nwill with the old inheritance method of doing it.\n\n> EXECUTE 'CREATE TABLE partition_tab.' || c_table || '(check ( name like '''|| chk_cond||''')) INHERITS (' ||TG_TABLE_NAME|| ');';\n\nThis is a bad idea. There's a lock upgrade hazard here that could end\nup causing deadlocks on INSERT. You should just create all the tables\nyou need beforehand.\n\nI'd recommend you do this using RANGE partitioning. For example:\n\ncreate table mytable (a text not null) partition by range (a);\ncreate table mytable_a partition of mytable for values from ('a') to\n('b'); -- note the upper bound of the range is non-inclusive.\ncreate table mytable_b partition of mytable for values from ('b') to ('c');\ninsert into mytable values('alpha'),('bravo');\n\nexplain select * from mytable where a = 'alpha';\n QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on mytable_a mytable (cost=0.00..27.00 rows=7 width=32)\n Filter: (a = 'alpha'::text)\n(2 rows)\n\nThe mytable_b is not scanned.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 May 2021 13:22:51 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "Thank you. This is a great help. \nBut \"a\" have some records with alpha and numeric. \nexample :\ninsert into mytable values('alpha'),('bravo');\ninsert into mytable values('1lpha'),('2ravo');\n\n\n On Thursday, May 20, 2021, 06:23:14 PM PDT, David Rowley <[email protected]> wrote: \n \n On Fri, 21 May 2021 at 12:32, Nagaraj Raj <[email protected]> wrote:\n> I am trying to create partitions on the table based on first letter of the column record value using inherit relation & check constraint.\n\nYou'll get much better performance out of native partitioning than you\nwill with the old inheritance method of doing it.\n\n> EXECUTE 'CREATE TABLE partition_tab.' || c_table || '(check ( name like '''|| chk_cond||''')) INHERITS (' ||TG_TABLE_NAME|| ');';\n\nThis is a bad idea. There's a lock upgrade hazard here that could end\nup causing deadlocks on INSERT. You should just create all the tables\nyou need beforehand.\n\nI'd recommend you do this using RANGE partitioning. For example:\n\ncreate table mytable (a text not null) partition by range (a);\ncreate table mytable_a partition of mytable for values from ('a') to\n('b'); -- note the upper bound of the range is non-inclusive.\ncreate table mytable_b partition of mytable for values from ('b') to ('c');\ninsert into mytable values('alpha'),('bravo');\n\nexplain select * from mytable where a = 'alpha';\n QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on mytable_a mytable (cost=0.00..27.00 rows=7 width=32)\n Filter: (a = 'alpha'::text)\n(2 rows)\n\nThe mytable_b is not scanned.\n\nDavid",
"msg_date": "Fri, 21 May 2021 02:36:14 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "On Fri, May 21, 2021 at 02:36:14AM +0000, Nagaraj Raj wrote:\n> Thank you. This is a great help.\n> But \"a\" have some records with alpha and numeric.\n\nSo then you should make one or more partitions FROM ('1')TO('9').\n\n> example :\n> insert into mytable values('alpha'),('bravo');\n> insert into mytable values('1lpha'),('2ravo');\n> \n> \n> On Thursday, May 20, 2021, 06:23:14 PM PDT, David Rowley <[email protected]> wrote: \n> \n> On Fri, 21 May 2021 at 12:32, Nagaraj Raj <[email protected]> wrote:\n> > I am trying to create partitions on the table based on first letter of the column record value using inherit relation & check constraint.\n> \n> You'll get much better performance out of native partitioning than you\n> will with the old inheritance method of doing it.\n> \n> > EXECUTE 'CREATE TABLE partition_tab.' || c_table || '(check ( name like '''|| chk_cond||''')) INHERITS (' ||TG_TABLE_NAME|| ');';\n> \n> This is a bad idea. There's a lock upgrade hazard here that could end\n> up causing deadlocks on INSERT. You should just create all the tables\n> you need beforehand.\n> \n> I'd recommend you do this using RANGE partitioning. For example:\n> \n> create table mytable (a text not null) partition by range (a);\n> create table mytable_a partition of mytable for values from ('a') to\n> ('b'); -- note the upper bound of the range is non-inclusive.\n> create table mytable_b partition of mytable for values from ('b') to ('c');\n> insert into mytable values('alpha'),('bravo');\n> \n> explain select * from mytable where a = 'alpha';\n>                             QUERY PLAN\n> -------------------------------------------------------------------\n> Seq Scan on mytable_a mytable  (cost=0.00..27.00 rows=7 width=32)\n>   Filter: (a = 'alpha'::text)\n> (2 rows)\n> \n> The mytable_b is not scanned.\n\n\n",
"msg_date": "Thu, 20 May 2021 21:38:36 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "On Thu, May 20, 2021, 8:38 PM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, May 21, 2021 at 02:36:14AM +0000, Nagaraj Raj wrote:\n> > Thank you. This is a great help.\n> > But \"a\" have some records with alpha and numeric.\n>\n> So then you should make one or more partitions FROM ('1')TO('9').\n>\n\nWhat about 0? Sorry.\n\nSeriously though, this seems like a dumb question but if I wanted a\npartition for each numeric digit and each alpha character (upper and\nlowercase?) And wanted to avoid using a default partition, how would I use\nminvalue and maxvalue and determine which partition of\nA to B\nB to C\n...\na to b\nb to c\n...\n0 to 1\nEtc... And how to figure out the gaps between 9 and A or z and A or what?\n\nI hope the nature of my question makes sense. What is the ordering of the\ncharacters as far as partitioning goes? Or rather, how would I figure that\nout?\n",
"msg_date": "Thu, 20 May 2021 21:24:00 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "So what about 'Z' or 'z' and 9?\nI created the partitions tables FROM (A) to (B) ;FROM (B) to (C) ;\n.. FROM (Y) to (Z) ;\n\nthen what would be the range of ZFROM (Z) to (?) ;\n same way for 9 On Thursday, May 20, 2021, 07:38:50 PM PDT, Justin Pryzby <[email protected]> wrote: \n \n On Fri, May 21, 2021 at 02:36:14AM +0000, Nagaraj Raj wrote:\n> Thank you. This is a great help. \n> But \"a\" have some records with alpha and numeric. \n\nSo then you should make one or more partitions FROM ('1')TO('9').\n\n> example :\n> insert into mytable values('alpha'),('bravo');\n> insert into mytable values('1lpha'),('2ravo');\n> \n> \n> On Thursday, May 20, 2021, 06:23:14 PM PDT, David Rowley <[email protected]> wrote: \n> \n> On Fri, 21 May 2021 at 12:32, Nagaraj Raj <[email protected]> wrote:\n> > I am trying to create partitions on the table based on first letter of the column record value using inherit relation & check constraint.\n> \n> You'll get much better performance out of native partitioning than you\n> will with the old inheritance method of doing it.\n> \n> > EXECUTE 'CREATE TABLE partition_tab.' || c_table || '(check ( name like '''|| chk_cond||''')) INHERITS (' ||TG_TABLE_NAME|| ');';\n> \n> This is a bad idea. There's a lock upgrade hazard here that could end\n> up causing deadlocks on INSERT. You should just create all the tables\n> you need beforehand.\n> \n> I'd recommend you do this using RANGE partitioning. 
For example:\n> \n> create table mytable (a text not null) partition by range (a);\n> create table mytable_a partition of mytable for values from ('a') to\n> ('b'); -- note the upper bound of the range is non-inclusive.\n> create table mytable_b partition of mytable for values from ('b') to ('c');\n> insert into mytable values('alpha'),('bravo');\n> \n> explain select * from mytable where a = 'alpha';\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Seq Scan on mytable_a mytable (cost=0.00..27.00 rows=7 width=32)\n> Filter: (a = 'alpha'::text)\n> (2 rows)\n> \n> The mytable_b is not scanned.",
"msg_date": "Fri, 21 May 2021 07:02:51 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "On Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\n chr\n-----\n {\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\n chr\n-----\n :\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\nDavid\n\n\n",
"msg_date": "Fri, 21 May 2021 20:23:49 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "Hi David,\nHi,\nI am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. \n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n create table mytable_z of mytable for values from ('Z') to ('Z[');CREATE TABLE \ninsert into mytable values(4,'ZAR83NB');\n\nERROR: no partition of relation \"mytable\" found for rowDETAIL: Partition key of the failing row contains (name) = (ZAR83NB).SQL state: 23514\n\n\n\n\n On Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected]> wrote: \n \n On Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\n chr\n-----\n {\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\n chr\n-----\n :\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? 
It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\nDavid",
"msg_date": "Fri, 21 May 2021 16:38:30 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "just out of curiosity,\nwhat would a typical query be ?\n\nselect * from t1 where name = somename ? == equality match // if yes,\nhash partitioning may be helpful to a have reasonably balanced distribution\nor\nselect * from t1 where name like 'some%'; ---- what would be the\ndistribution of rows for such queries. i mean it can return 1 row or all\nrows or anything in between.\n\nthat may result in unbalanced partitioning.\n\nthen why partition at all ? 2B rows, if i go with 100KB size per row. that\nwould be around 200GB.\n\nalso, queries may benefit from trigram matching.\nIndex Columns for `LIKE` in PostgreSQL | Niall Burkley's Developer Blog\n<https://niallburkley.com/blog/index-columns-for-like-in-postgres/>\n\n\n<https://niallburkley.com/blog/index-columns-for-like-in-postgres/>\n\n\n\nOn Fri, 21 May 2021 at 22:08, Nagaraj Raj <[email protected]> wrote:\n\n> Hi David,\n>\n> Hi,\n>\n> I am trying to create partitions on the table which have around 2BIL\n> records and users will always look for the \"name\", its not possible to\n> create a partition with a list, so we are trying to create a\n> partition-based first letter of the name column. name column has a\n> combination of alpha numeric values.\n>\n>\n>\n> > postgres=# select chr(ascii('z')+1) ;\n> > chr\n> > -----\n> > {\n> > (1 row)\n>\n> I tried as below, I'm able to create a partition table for 'Z', but it's\n> not identifying partition table.\n>\n>\n> postgres=# select chr(ascii('Z')+1) ;\n> chr\n> -----\n> [\n> (1 row)\n>\n> create table mytable_z of mytable for values from ('Z') to ('Z[');\n> CREATE TABLE\n>\n> insert into mytable values(4,'ZAR83NB');\n>\n> ERROR: no partition of relation \"mytable\" found for row DETAIL: Partition\n> key of the failing row contains (name) = (ZAR83NB). 
SQL state: 23514\n>\n>\n>\n>\n>\n> On Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <\n> [email protected]> wrote:\n>\n>\n> On Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> > then what would be the range of Z\n> > FROM (Z) to (?) ;\n>\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n>\n>\n> > same way for 9\n>\n> postgres=# select chr(ascii('9')+1) ;\n> chr\n> -----\n> :\n> (1 row)\n>\n> https://en.wikipedia.org/wiki/ASCII\n>\n> You can also use MINVALUE and MAXVALUE to mean unbounded at either end\n> of the range.\n>\n> But is there a particular need that you want to partition this way? It\n> seems like it might be a bit painful to maintain, especially if you're\n> not limiting yourself to ASCII or ANSI characters.\n>\n> You might want to consider HASH partitioning if you're just looking\n> for a way to keep your tables and indexes to a more manageable size.\n> You've not really mentioned your use case here, so it's hard to give\n> any advice.\n>\n> There are more details about partitioning in\n> https://www.postgresql.org/docs/current/ddl-partitioning.html\n>\n>\n> David\n>\n>\n>\n\n-- \nThanks,\nVijay\nMumbai, India\n",
"msg_date": "Sat, 22 May 2021 00:38:14 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "> select * from t1 where name = somename ? == equality match // if yes, hash partitioning may be helpful to a have reasonably balanced distribution\nyes, its an equality check,\n\n On Friday, May 21, 2021, 12:08:25 PM PDT, Vijaykumar Jain <[email protected]> wrote: \n \n just out of curiosity,what would a typical query be ?\nselect * from t1 where name = somename ? == equality match // if yes, hash partitioning may be helpful to a have reasonably balanced distributionorselect * from t1 where name like 'some%'; ---- what would be the distribution of rows for such queries. i mean it can return 1 row or all rows or anything in between. that may result in unbalanced partitioning. then why partition at all ? 2B rows, if i go with 100KB size per row. that would be around 200GB.\nalso, queries may benefit from trigram matching.Index Columns for `LIKE` in PostgreSQL | Niall Burkley's Developer Blog\n\n \n\n\nOn Fri, 21 May 2021 at 22:08, Nagaraj Raj <[email protected]> wrote:\n\n Hi David,\nHi,\nI am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. 
\n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n create table mytable_z of mytable for values from ('Z') to ('Z[');CREATE TABLE \ninsert into mytable values(4,'ZAR83NB');\n\nERROR: no partition of relation \"mytable\" found for rowDETAIL: Partition key of the failing row contains (name) = (ZAR83NB).SQL state: 23514\n\n\n\n\n On Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected]> wrote: \n \n On Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\n chr\n-----\n {\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\n chr\n-----\n :\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\nDavid\n\n\n \n\n\n-- \nThanks,\nVijay\nMumbai, India\n",
"msg_date": "Fri, 21 May 2021 19:26:53 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "Hi\n\nI don’t discuss here the choice itself but this is not correct:\n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\n \n\nIt should be\n\ncreate table mytable_z of mytable for values from ('Z') to ('[')\n\n \n\nMichel SALAIS\n\n \n\nDe : Nagaraj Raj <[email protected]> \nEnvoyé : vendredi 21 mai 2021 18:39\nÀ : David Rowley <[email protected]>\nCc : Justin Pryzby <[email protected]>; Pgsql-performance <[email protected]>\nObjet : Re: Partition with check constraint with \"like\"\n\n \n\nHi David,\n\n \n\nHi,\n\n \n\nI am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\n \n\n \n\n \n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\n \n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. \n\n \n\n \n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n\n \n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\nCREATE TABLE \n\n \n\ninsert into mytable values(4,'ZAR83NB');\n\n \n\nERROR: no partition of relation \"mytable\" found for row DETAIL: Partition key of the failing row contains (name) = (ZAR83NB). SQL state: 23514\n\n \n\n \n\n \n\n \n\n \n\nOn Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected] <mailto:[email protected]> > wrote: \n\n \n\n \n\nOn Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected] <mailto:[email protected]> > wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) 
;\n\npostgres=# select chr(ascii('z')+1) ;\nchr\n-----\n{\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\nchr\n-----\n:\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\nDavid",
"msg_date": "Fri, 21 May 2021 23:00:18 +0200",
"msg_from": "\"Michel SALAIS\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Partition with check constraint with \"like\""
},
{
"msg_contents": "Hi, \nThis is also not working,\n\ncreate table mytable_z partition of mytable for values from ('Z') to ('[')partition by range(id);\n\nERROR: empty range bound specified for partition \"mytable_z\"DETAIL: Specified lower bound ('Z') is greater than or equal to upper bound ('[').SQL state: 42P17\n\nDB running on version PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n On Friday, May 21, 2021, 02:00:38 PM PDT, Michel SALAIS <[email protected]> wrote: \n \n \nHi\n\nI don’t discuss here the choice itself but this is not correct:\n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\n \n\nIt should be\n\ncreate table mytable_z of mytable for values from ('Z') to ('[')\n\n \n\nMichel SALAIS\n\n \n\nDe : Nagaraj Raj <[email protected]> \nEnvoyé : vendredi 21 mai 2021 18:39\nÀ : David Rowley <[email protected]>\nCc : Justin Pryzby <[email protected]>; Pgsql-performance <[email protected]>\nObjet : Re: Partition with check constraint with \"like\"\n\n \n\nHi David,\n\n \n\nHi,\n\n \n\nI am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. 
name column has a combination of alpha numeric values.\n\n \n\n \n\n \n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\n \n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. \n\n \n\n \n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n\n \n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\nCREATE TABLE \n\n \n\ninsert into mytable values(4,'ZAR83NB');\n\n \n\nERROR: no partition of relation \"mytable\" found for row DETAIL: Partition key of the failing row contains (name) = (ZAR83NB). SQL state: 23514\n\n \n\n \n\n \n\n \n\n \n\nOn Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected]> wrote: \n\n \n\n \n\nOn Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\nchr\n-----\n{\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\nchr\n-----\n:\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? 
It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\nDavid",
"msg_date": "Fri, 21 May 2021 22:59:01 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "sorry, forgot to attach the test cases.Postgres 13 | db<>fiddle\n\nFree online SQL environment for experimenting and sharing.\n\n\n On Friday, May 21, 2021, 03:59:18 PM PDT, Nagaraj Raj <[email protected]> wrote: \n \n Hi, \nThis is also not working,\n\ncreate table mytable_z partition of mytable for values from ('Z') to ('[')partition by range(id);\n\nERROR: empty range bound specified for partition \"mytable_z\"DETAIL: Specified lower bound ('Z') is greater than or equal to upper bound ('[').SQL state: 42P17\n\nDB running on version PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n On Friday, May 21, 2021, 02:00:38 PM PDT, Michel SALAIS <[email protected]> wrote: \n \n \nHi\n\nI don’t discuss here the choice itself but this is not correct:\n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\n \n\nIt should be\n\ncreate table mytable_z of mytable for values from ('Z') to ('[')\n\n \n\nMichel SALAIS\n\n \n\nDe : Nagaraj Raj <[email protected]> \nEnvoyé : vendredi 21 mai 2021 18:39\nÀ : David Rowley <[email protected]>\nCc : Justin Pryzby <[email protected]>; Pgsql-performance <[email protected]>\nObjet : Re: Partition with check constraint with \"like\"\n\n \n\nHi David,\n\n \n\nHi,\n\n \n\nI am trying to 
create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\n \n\n \n\n \n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\n \n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. \n\n \n\n \n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n\n \n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\nCREATE TABLE \n\n \n\ninsert into mytable values(4,'ZAR83NB');\n\n \n\nERROR: no partition of relation \"mytable\" found for row DETAIL: Partition key of the failing row contains (name) = (ZAR83NB). SQL state: 23514\n\n \n\n \n\n \n\n \n\n \n\nOn Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected]> wrote: \n\n \n\n \n\nOn Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected]> wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\nchr\n-----\n{\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\nchr\n-----\n:\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? 
It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\nDavid",
"msg_date": "Fri, 21 May 2021 23:28:28 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "On Sat, 22 May 2021 at 04:38, Nagaraj Raj <[email protected]> wrote:\n> I am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\nGoing by the description of your use case, I think HASH partitioning\nmight be a better option for you. It'll certainly be less painful to\ninitially set up and maintain.\n\nHere's an example:\n\ncreate table mytable (a text) partition by hash(a);\ncreate table mytable0 partition of mytable for values with(modulus 10,\nremainder 0);\ncreate table mytable1 partition of mytable for values with(modulus 10,\nremainder 1);\ncreate table mytable2 partition of mytable for values with(modulus 10,\nremainder 2); --etc\n\nChange the modulus to the number of partitions you want and ensure you\ncreate a partition for each modulus. In this case, it would be 0 to 9.\n\nDavid\n\n\n",
"msg_date": "Sat, 22 May 2021 13:38:16 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "On Sat, 22 May 2021 at 10:59, Nagaraj Raj <[email protected]> wrote:\n> ERROR: empty range bound specified for partition \"mytable_z\" DETAIL: Specified lower bound ('Z') is greater than or equal to upper bound ('['). SQL state: 42P17\n\nIt looks like '[' does not come after 'Z' in your collation.\n\nDavid\n\n\n",
"msg_date": "Sat, 22 May 2021 13:38:48 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition with check constraint with \"like\""
},
{
"msg_contents": "Hi,\n\n \n\nThen we must know what is your collation…\n\nWhat is the collation of your database?\n\n \n\nselect datname, pg_catalog.pg_encoding_to_char(encoding) \"encoding\", datcollate, datctype\n\nfrom pg_database;\n\n \n\nIt is also possible to define an explicit collation for the column. You can have it when you describe the table…\n\n \n\nBut I think like others have already said that this is perhaps not the right choice.\n\n \n\nMichel SALAIS\n\nDe : Nagaraj Raj <[email protected]> \nEnvoyé : samedi 22 mai 2021 01:28\nÀ : 'David Rowley' <[email protected]>; Michel SALAIS <[email protected]>\nCc : 'Justin Pryzby' <[email protected]>; 'Pgsql-performance' <[email protected]>; Michael Lewis <[email protected]>\nObjet : Re: Partition with check constraint with \"like\"\n\n \n\nsorry, forgot to attach the test cases.\n\nPostgres 13 | db <https://dbfiddle.uk/?rdbms=postgres_13&fiddle=602350db327ee6215837bbf48f0763f8> <>fiddle\n\n \n\n\n\n\t\n\nPostgres 13 | db<>fiddle\n\n\nFree online SQL environment for experimenting and sharing.\n\n \n\n \n\n \n\nOn Friday, May 21, 2021, 03:59:18 PM PDT, Nagaraj Raj <[email protected] <mailto:[email protected]> > wrote: \n\n \n\n \n\nHi, \n\n \n\nThis is also not working,\n\n \n\n \n\ncreate table mytable_z partition of mytable for values from ('Z') to ('[')\n\npartition by range(id);\n\n \n\n \n\nERROR: empty range bound specified for partition \"mytable_z\" DETAIL: Specified lower bound ('Z') is greater than or equal to upper bound ('['). 
SQL state: 42P17\n\n \n\n \n\nDB running on version PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit\n\n \n\nOn Friday, May 21, 2021, 02:00:38 PM PDT, Michel SALAIS <[email protected] <mailto:[email protected]> > wrote: \n\n \n\n \n\nHi\n\nI don’t discuss here the choice itself but this is not correct:\n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\n \n\nIt should be\n\ncreate table mytable_z of mytable for values from ('Z') to ('[')\n\n \n\nMichel SALAIS\n\n \n\nDe : Nagaraj Raj <[email protected] <mailto:[email protected]> > \nEnvoyé : vendredi 21 mai 2021 18:39\nÀ : David Rowley <[email protected] <mailto:[email protected]> >\nCc : Justin Pryzby <[email protected] <mailto:[email protected]> >; Pgsql-performance <[email protected] <mailto:[email protected]> >\nObjet : Re: Partition with check constraint with \"like\"\n\n \n\nHi David,\n\n \n\nHi,\n\n \n\nI am trying to create partitions on the table which have around 2BIL records and users will always look for the \"name\", its not possible to create a partition with a list, so we are trying to create a partition-based first letter of the name column. name column has a combination of alpha numeric values.\n\n \n\n \n\n \n\n> postgres=# select chr(ascii('z')+1) ;\n> chr\n> -----\n> {\n> (1 row)\n\n \n\nI tried as below, I'm able to create a partition table for 'Z', but it's not identifying partition table. \n\n \n\n \n\npostgres=# select chr(ascii('Z')+1) ;\nchr\n-----\n[\n(1 row)\n\n \n\ncreate table mytable_z of mytable for values from ('Z') to ('Z[');\n\nCREATE TABLE \n\n \n\ninsert into mytable values(4,'ZAR83NB');\n\n \n\nERROR: no partition of relation \"mytable\" found for row DETAIL: Partition key of the failing row contains (name) = (ZAR83NB). 
SQL state: 23514\n\n \n\n \n\n \n\n \n\nOn Friday, May 21, 2021, 01:24:13 AM PDT, David Rowley <[email protected] <mailto:[email protected]> > wrote: \n\n \n\n \n\nOn Fri, 21 May 2021 at 19:02, Nagaraj Raj <[email protected] <mailto:[email protected]> > wrote:\n> then what would be the range of Z\n> FROM (Z) to (?) ;\n\npostgres=# select chr(ascii('z')+1) ;\nchr\n-----\n{\n(1 row)\n\n\n> same way for 9\n\npostgres=# select chr(ascii('9')+1) ;\nchr\n-----\n:\n(1 row)\n\nhttps://en.wikipedia.org/wiki/ASCII\n\nYou can also use MINVALUE and MAXVALUE to mean unbounded at either end\nof the range.\n\nBut is there a particular need that you want to partition this way? It\nseems like it might be a bit painful to maintain, especially if you're\nnot limiting yourself to ASCII or ANSI characters.\n\nYou might want to consider HASH partitioning if you're just looking\nfor a way to keep your tables and indexes to a more manageable size.\nYou've not really mentioned your use case here, so it's hard to give\nany advice.\n\nThere are more details about partitioning in\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\n\n\n\nDavid",
"msg_date": "Sat, 22 May 2021 07:41:43 +0200",
"msg_from": "\"Michel SALAIS\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Partition with check constraint with \"like\""
}
] |
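The thread above keeps tripping over the same detail: in the database's default (linguistic) collation, '[' does not necessarily sort after 'Z', so `FROM ('Z') TO ('[')` is rejected as an empty range. One way to make the per-letter RANGE scheme work deterministically is to pin the partition key to the "C" collation, where comparisons are plain byte order and '[' (0x5B) sorts immediately after 'Z' (0x5A). A minimal, untested sketch — table and column names follow the thread's example, and the per-letter bounds shown are only two of the 26+ partitions that would be needed:

```sql
-- Sketch (assumption: byte-order comparison is acceptable for this key).
-- COLLATE "C" makes FROM ('Z') TO ('[') a valid, non-empty range.
CREATE TABLE mytable (
    id   integer,
    name text COLLATE "C" NOT NULL
) PARTITION BY RANGE (name);

CREATE TABLE mytable_a PARTITION OF mytable FOR VALUES FROM ('A') TO ('B');
CREATE TABLE mytable_z PARTITION OF mytable FOR VALUES FROM ('Z') TO ('[');

-- The row that previously failed now routes to mytable_z:
INSERT INTO mytable VALUES (4, 'ZAR83NB');
```

As David Rowley notes, HASH partitioning sidesteps the collation question entirely and is easier to maintain; the COLLATE "C" approach only makes sense if queries genuinely filter on leading-letter ranges.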
[
{
"msg_contents": "Hi,\n\nmy POC in postgres 12.(important ?)\n\nif I setup 2 postgres clusters, and create a publication in one and a\nsubscription in the other,\nand do on the pub an update which does not change the data (updating an\nexisting record with same data) then this (useless) update go through\nreplication.(ie consumes network ressource)\n\nwhat are ways to avoid this ?\n(I thought of a trigger to not execute the useless update, but I dont see\nhow to do this)\nany ideas ?\n\nthanks\nPS: remarks about the meaning of this are off topic, thanks\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com",
"msg_date": "Fri, 21 May 2021 14:41:31 +0200",
"msg_from": "Marc Millas <[email protected]>",
"msg_from_op": true,
"msg_subject": "logical replication"
},
{
"msg_contents": "\nOn 5/21/21 8:41 AM, Marc Millas wrote:\n> Hi,\n>\n> my POC in postgres 12.(important ?)\n>\n> if I setup 2 postgres clusters, and create a publication in one and a\n> subscription in the other,\n> and do on the pub an update which does not change the data (updating\n> an existing record with same data) then this (useless) update go\n> through replication.(ie consumes network ressource)\n>\n> what are ways to avoid this ?\n> (I thought of a trigger to not execute the useless update, but I\n> dont see how to do this)\n> any ideas ?\n>\n> thanks\n> PS: remarks about the meaning of this are off topic, thanks\n>\n>\n\nPostgres provides exactly such a trigger. See\nhttps://www.postgresql.org/docs/12/functions-trigger.html\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 May 2021 09:21:13 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logical replication"
},
{
"msg_contents": "perfect :-)\n\nthanks\n\nMarc MILLAS\nSenior Architect\n+33607850334\nwww.mokadb.com\n\n\n\nOn Fri, May 21, 2021 at 3:21 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 5/21/21 8:41 AM, Marc Millas wrote:\n> > Hi,\n> >\n> > my POC in postgres 12.(important ?)\n> >\n> > if I setup 2 postgres clusters, and create a publication in one and a\n> > subscription in the other,\n> > and do on the pub an update which does not change the data (updating\n> > an existing record with same data) then this (useless) update go\n> > through replication.(ie consumes network ressource)\n> >\n> > what are ways to avoid this ?\n> > (I thought of a trigger to not execute the useless update, but I\n> > dont see how to do this)\n> > any ideas ?\n> >\n> > thanks\n> > PS: remarks about the meaning of this are off topic, thanks\n> >\n> >\n>\n> Postgres provides exactly such a trigger. See\n> https://www.postgresql.org/docs/12/functions-trigger.html\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Fri, 21 May 2021 15:44:41 +0200",
"msg_from": "Marc Millas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logical replication"
}
] |
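The built-in trigger Andrew points to on that documentation page is `suppress_redundant_updates_trigger()`. A minimal sketch of attaching it on the publisher side (the table name is illustrative) — when the trigger detects that an UPDATE would leave the row unchanged, the update is skipped entirely, so nothing is written to WAL and nothing reaches logical subscribers:

```sql
-- Sketch: suppress no-op UPDATEs on the published table.
-- The docs recommend naming it so it fires last among BEFORE ROW triggers
-- (trigger names fire in alphabetical order), hence the "zzz_" prefix.
CREATE TRIGGER zzz_suppress_redundant_updates
    BEFORE UPDATE ON mytable
    FOR EACH ROW
    EXECUTE FUNCTION suppress_redundant_updates_trigger();
```

Note the per-row comparison has a cost on the publisher, so this trades a little CPU for the saved WAL and network traffic.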
[
{
"msg_contents": "I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n5 times a day, I see something like a \"stutter\", where a bundle of\nmaybe 30 transactions suddenly finish at the same time. It looks like\n(it is quite hard to catch this exactly) that the lead transaction\nwhich has been blocking the rest has been blocked in COMMIT. In each\ncase it blocks for almost exactly 30s, just over, and once it goes\nthrough, releases locks, and the others clear behind it.\n\nMy question: what are the range of possibilities that might cause a\nCOMMIT to block? I haven't seen this before. Is there anything\nsuspicious about the regular 30s? Occasionally we see 60s, which\nseems likely to be two sets of 30.\n\nRegards\nBob\n\n\n",
"msg_date": "Mon, 24 May 2021 12:30:19 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "transaction blocking on COMMIT"
},
{
"msg_contents": "I think there have been similar issues reported earlier as well. But it\nwould be too early to generalize.\n\n\nWhere is the db server running? Cloud?\n\nAlso what is the version ?\n\n\nOn Mon, May 24, 2021, 5:00 PM Bob Jolliffe <[email protected]> wrote:\n\n> I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n> 5 times a day, I see something like a \"stutter\", where a bundle of\n> maybe 30 transactions suddenly finish at the same time. It looks like\n> (it is quite hard to catch this exactly) that the lead transaction\n> which has been blocking the rest has been blocked in COMMIT. In each\n> case it blocks for almost exactly 30s, just over, and once it goes\n> through, releases locks, and the others clear behind it.\n>\n> My question: what are the range of possibilities that might cause a\n> COMMIT to block? I haven't seen this before. Is there anything\n> suspicious about the regular 30s? Occasionally we see 60s, which\n> seems likely to be two sets of 30.\n>\n> Regards\n> Bob\n>\n>\n>",
"msg_date": "Mon, 24 May 2021 17:05:32 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "Hello Jain\n\nSorry forgot to indicate: it is running the ubuntu packaged version\n13.3 on ubuntu 20.04.\n\nIt is not in the cloud, but is a VM in a government datacentre. I am\nnot sure of the underlying hyperviser. I could find out.\n\nRegards\nBob\n\n\nOn Mon, 24 May 2021 at 12:35, Vijaykumar Jain\n<[email protected]> wrote:\n>\n> I think there have been similar issues reported earlier as well. But it would be too early to generalize.\n>\n>\n> Where is the db server running? Cloud?\n>\n> Also what is the version ?\n>\n>\n> On Mon, May 24, 2021, 5:00 PM Bob Jolliffe <[email protected]> wrote:\n>>\n>> I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n>> 5 times a day, I see something like a \"stutter\", where a bundle of\n>> maybe 30 transactions suddenly finish at the same time. It looks like\n>> (it is quite hard to catch this exactly) that the lead transaction\n>> which has been blocking the rest has been blocked in COMMIT. In each\n>> case it blocks for almost exactly 30s, just over, and once it goes\n>> through, releases locks, and the others clear behind it.\n>>\n>> My question: what are the range of possibilities that might cause a\n>> COMMIT to block? I haven't seen this before. Is there anything\n>> suspicious about the regular 30s? Occasionally we see 60s, which\n>> seems likely to be two sets of 30.\n>>\n>> Regards\n>> Bob\n>>\n>>\n\n\n",
"msg_date": "Mon, 24 May 2021 14:52:56 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "No worries,\n\nThere were some threads earlier which mentioned some automated changes to\ndisk by the provider that resulted in some slowness.\n\nBut otherwise also, do you query system, disk metrics.\n\nDo you see any anomaly in disk io (wait) when you saw blocking?\nIf it did, did the io return to normal when blocks were cleared ?\n\n\n\nOn Mon, May 24, 2021, 7:23 PM Bob Jolliffe <[email protected]> wrote:\n\n> Hello Jain\n>\n> Sorry forgot to indicate: it is running the ubuntu packaged version\n> 13.3 on ubuntu 20.04.\n>\n> It is not in the cloud, but is a VM in a government datacentre. I am\n> not sure of the underlying hyperviser. I could find out.\n>\n> Regards\n> Bob\n>\n>\n> On Mon, 24 May 2021 at 12:35, Vijaykumar Jain\n> <[email protected]> wrote:\n> >\n> > I think there have been similar issues reported earlier as well. But it\n> would be too early to generalize.\n> >\n> >\n> > Where is the db server running? Cloud?\n> >\n> > Also what is the version ?\n> >\n> >\n> > On Mon, May 24, 2021, 5:00 PM Bob Jolliffe <[email protected]>\n> wrote:\n> >>\n> >> I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n> >> 5 times a day, I see something like a \"stutter\", where a bundle of\n> >> maybe 30 transactions suddenly finish at the same time. It looks like\n> >> (it is quite hard to catch this exactly) that the lead transaction\n> >> which has been blocking the rest has been blocked in COMMIT. In each\n> >> case it blocks for almost exactly 30s, just over, and once it goes\n> >> through, releases locks, and the others clear behind it.\n> >>\n> >> My question: what are the range of possibilities that might cause a\n> >> COMMIT to block? I haven't seen this before. Is there anything\n> >> suspicious about the regular 30s? 
Occasionally we see 60s, which\n> >> seems likely to be two sets of 30.\n> >>\n> >> Regards\n> >> Bob\n> >>\n> >>\n>\n\nNo worries,There were some threads earlier which mentioned some automated changes to disk by the provider that resulted in some slowness.But otherwise also, do you query system, disk metrics.Do you see any anomaly in disk io (wait) when you saw blocking?If it did, did the io return to normal when blocks were cleared ?On Mon, May 24, 2021, 7:23 PM Bob Jolliffe <[email protected]> wrote:Hello Jain\n\nSorry forgot to indicate: it is running the ubuntu packaged version\n13.3 on ubuntu 20.04.\n\nIt is not in the cloud, but is a VM in a government datacentre. I am\nnot sure of the underlying hyperviser. I could find out.\n\nRegards\nBob\n\n\nOn Mon, 24 May 2021 at 12:35, Vijaykumar Jain\n<[email protected]> wrote:\n>\n> I think there have been similar issues reported earlier as well. But it would be too early to generalize.\n>\n>\n> Where is the db server running? Cloud?\n>\n> Also what is the version ?\n>\n>\n> On Mon, May 24, 2021, 5:00 PM Bob Jolliffe <[email protected]> wrote:\n>>\n>> I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n>> 5 times a day, I see something like a \"stutter\", where a bundle of\n>> maybe 30 transactions suddenly finish at the same time. It looks like\n>> (it is quite hard to catch this exactly) that the lead transaction\n>> which has been blocking the rest has been blocked in COMMIT. In each\n>> case it blocks for almost exactly 30s, just over, and once it goes\n>> through, releases locks, and the others clear behind it.\n>>\n>> My question: what are the range of possibilities that might cause a\n>> COMMIT to block? I haven't seen this before. Is there anything\n>> suspicious about the regular 30s? Occasionally we see 60s, which\n>> seems likely to be two sets of 30.\n>>\n>> Regards\n>> Bob\n>>\n>>",
"msg_date": "Mon, 24 May 2021 19:39:47 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "It is hard to say as it only happens for 30s couple of times per day.\nEverything does return to normal after the blocking transaction is\ncommitted. It could be a disk thing or even a network issue (the java\napp is on a different machine to the db). But I never saw\ntransactions blocked in commit before so was wondering if there is any\nrational set of reasons why it might do that.\n\nOn Mon, 24 May 2021 at 15:09, Vijaykumar Jain\n<[email protected]> wrote:\n>\n> No worries,\n>\n> There were some threads earlier which mentioned some automated changes to disk by the provider that resulted in some slowness.\n>\n> But otherwise also, do you query system, disk metrics.\n>\n> Do you see any anomaly in disk io (wait) when you saw blocking?\n> If it did, did the io return to normal when blocks were cleared ?\n>\n>\n>\n> On Mon, May 24, 2021, 7:23 PM Bob Jolliffe <[email protected]> wrote:\n>>\n>> Hello Jain\n>>\n>> Sorry forgot to indicate: it is running the ubuntu packaged version\n>> 13.3 on ubuntu 20.04.\n>>\n>> It is not in the cloud, but is a VM in a government datacentre. I am\n>> not sure of the underlying hyperviser. I could find out.\n>>\n>> Regards\n>> Bob\n>>\n>>\n>> On Mon, 24 May 2021 at 12:35, Vijaykumar Jain\n>> <[email protected]> wrote:\n>> >\n>> > I think there have been similar issues reported earlier as well. But it would be too early to generalize.\n>> >\n>> >\n>> > Where is the db server running? Cloud?\n>> >\n>> > Also what is the version ?\n>> >\n>> >\n>> > On Mon, May 24, 2021, 5:00 PM Bob Jolliffe <[email protected]> wrote:\n>> >>\n>> >> I am seeing a strange issue on a database using jdbc. Regularly, 4 or\n>> >> 5 times a day, I see something like a \"stutter\", where a bundle of\n>> >> maybe 30 transactions suddenly finish at the same time. It looks like\n>> >> (it is quite hard to catch this exactly) that the lead transaction\n>> >> which has been blocking the rest has been blocked in COMMIT. 
In each\n>> >> case it blocks for almost exactly 30s, just over, and once it goes\n>> >> through, releases locks, and the others clear behind it.\n>> >>\n>> >> My question: what are the range of possibilities that might cause a\n>> >> COMMIT to block? I haven't seen this before. Is there anything\n>> >> suspicious about the regular 30s? Occasionally we see 60s, which\n>> >> seems likely to be two sets of 30.\n>> >>\n>> >> Regards\n>> >> Bob\n>> >>\n>> >>\n\n\n",
"msg_date": "Mon, 24 May 2021 17:22:28 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "\n\n> On May 24, 2021, at 09:22, Bob Jolliffe <[email protected]> wrote:\n> \n> It is hard to say as it only happens for 30s couple of times per day.\n> Everything does return to normal after the blocking transaction is\n> committed. It could be a disk thing or even a network issue (the java\n> app is on a different machine to the db). But I never saw\n> transactions blocked in commit before so was wondering if there is any\n> rational set of reasons why it might do that.\n\nOne thing you can check is to turn off synchronous_commit (understanding the possibility of \"time loss\" in the event of a system crash). If that mitigates the problem, the issue is likely the I/O subsystem blocking during the fsync() operation.\n\n",
"msg_date": "Mon, 24 May 2021 09:24:09 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "On 05/24/21 19:24, Christophe Pettus wrote:\n>\n>> On May 24, 2021, at 09:22, Bob Jolliffe <[email protected]> wrote:\n>>\n>> It is hard to say as it only happens for 30s couple of times per day.\n>> Everything does return to normal after the blocking transaction is\n>> committed. It could be a disk thing or even a network issue (the java\n>> app is on a different machine to the db). But I never saw\n>> transactions blocked in commit before so was wondering if there is any\n>> rational set of reasons why it might do that.\n> One thing you can check is to turn off synchronous_commit (understanding the possibility of \"time loss\" in the event of a system crash). If that mitigates the problem, the issue is likely the I/O subsystem blocking during the fsync() operation.\n>\n>\nJust a question. Is there a btrfs(with compression maybe) around? 30 \nseconds is a commit(file system) timeout for btrfs. Some processes like \nbtrfs cleaner/allocate/worker on top of CPU/io use?\n\n\n\n",
"msg_date": "Tue, 25 May 2021 00:59:11 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transaction blocking on COMMIT"
},
{
"msg_contents": "No brtfs. We are going to try turning off synchronous_commit\ntemporarily to see if there are underlying I/O issues.\n\nOn Mon, 24 May 2021 at 22:59, Alexey M Boltenkov <[email protected]> wrote:\n>\n> On 05/24/21 19:24, Christophe Pettus wrote:\n> >\n> >> On May 24, 2021, at 09:22, Bob Jolliffe <[email protected]> wrote:\n> >>\n> >> It is hard to say as it only happens for 30s couple of times per day.\n> >> Everything does return to normal after the blocking transaction is\n> >> committed. It could be a disk thing or even a network issue (the java\n> >> app is on a different machine to the db). But I never saw\n> >> transactions blocked in commit before so was wondering if there is any\n> >> rational set of reasons why it might do that.\n> > One thing you can check is to turn off synchronous_commit (understanding the possibility of \"time loss\" in the event of a system crash). If that mitigates the problem, the issue is likely the I/O subsystem blocking during the fsync() operation.\n> >\n> >\n> Just a question. Is there a btrfs(with compression maybe) around? 30\n> seconds is a commit(file system) timeout for btrfs. Some processes like\n> btrfs cleaner/allocate/worker on top of CPU/io use?\n>\n\n\n",
"msg_date": "Thu, 27 May 2021 14:08:18 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: transaction blocking on COMMIT"
}
] |
[
{
"msg_contents": "Hi list.\n\nI have a question about the different plans produced by postgres for\nan outer join versus an inner join.\n\n(complete sql script attached)\n\nTake these two tables:\n\n CREATE TABLE album\n ( id SERIAL PRIMARY KEY,\n title TEXT NOT NULL\n );\n CREATE INDEX album_title ON album (title);\n\n CREATE TABLE track\n ( id SERIAL PRIMARY KEY,\n album_id INT NOT NULL REFERENCES album(id),\n title TEXT NOT NULL\n );\n CREATE INDEX track_album ON track(album_id);\n CREATE INDEX track_title ON track(title);\n\nwhere crucially `track` references `album(id)` with a `NOT NULL` reference.\n\nNow, if we query both an inner and an outer join on `track` and\n`album` (after having suitably filled-in data), we get very different\nplans where only the inner join exploits indices.\n\n\nThat is:\n\n EXPLAIN ANALYZE\n SELECT t.id\n FROM track t\n INNER JOIN album a ON (t.album_id = a.id)\n ORDER BY a.title ASC\n LIMIT 10;\n\nProduces this query plan:\n\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.56..3.29 rows=10 width=36) (actual time=0.038..0.052\nrows=10 loops=1)\n -> Nested Loop (cost=0.56..3606.93 rows=13200 width=36) (actual\ntime=0.036..0.046 rows=10 loops=1)\n -> Index Scan using album_title on album a\n(cost=0.28..113.23 rows=1397 width=36) (actual time=0.015..0.016\nrows=1 loops=1)\n -> Index Scan using track_album on track t (cost=0.29..1.84\nrows=66 width=8) (actual time=0.012..0.018 rows=10 loops=1)\n Index Cond: (album_id = a.id)\n Planning Time: 0.473 ms\n Execution Time: 0.096 ms\n(7 rows)\n\nWhile this:\n\n EXPLAIN ANALYZE\n SELECT t.id\n FROM track t\n LEFT JOIN album a ON (t.album_id = a.id)\n ORDER BY a.title ASC\n LIMIT 10;\n\nProduces this query plan:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Limit 
(cost=604.43..604.45 rows=10 width=36) (actual\ntime=20.934..20.943 rows=10 loops=1)\n -> Sort (cost=604.43..637.43 rows=13200 width=36) (actual\ntime=20.932..20.934 rows=10 loops=1)\n Sort Key: a.title\n Sort Method: top-N heapsort Memory: 27kB\n -> Hash Left Join (cost=42.43..319.18 rows=13200 width=36)\n(actual time=1.082..12.333 rows=10000 loops=1)\n Hash Cond: (t.album_id = a.id)\n -> Seq Scan on track t (cost=0.00..242.00 rows=13200\nwidth=8) (actual time=0.031..3.919 rows=10000 loops=1)\n -> Hash (cost=24.97..24.97 rows=1397 width=36)\n(actual time=0.990..0.991 rows=1000 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 101kB\n -> Seq Scan on album a (cost=0.00..24.97\nrows=1397 width=36) (actual time=0.014..0.451 rows=1000 loops=1)\n Planning Time: 0.251 ms\n Execution Time: 20.999 ms\n(12 rows)\n\nMy question then is, shouldn't the inner and outer join queries be\nsemantically equivalent when the columns we are joining on are\nnon-nullable foreign keys?\nIs there some corner case I'm not considering?\nWould it be a good addition to postgres if it could detect this and\nproduce a plan that exploits the indices?\n\n(My root motivation for asking this question is this github issue:\nhttps://github.com/hasura/graphql-engine/issues/5949)\n\nRegards,\nPhilip",
"msg_date": "Mon, 24 May 2021 14:10:24 +0200",
"msg_from": "Philip Lykke Carlsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimising outer joins in the presence of non-nullable references"
},
{
"msg_contents": "Philip Lykke Carlsen <[email protected]> writes:\n> My question then is, shouldn't the inner and outer join queries be\n> semantically equivalent when the columns we are joining on are\n> non-nullable foreign keys?\n\nMaybe, but no such knowledge is built into the planner.\n\n> Is there some corner case I'm not considering?\n\nI'm a little suspicious whether it's actually a safe assumption to\nmake, in view of the fact that enforcement of FKs is delayed till\nend-of-statement or even end-of-transaction. Thus, the relationship\nisn't necessarily valid at every instant.\n\n> Would it be a good addition to postgres if it could detect this and\n> produce a plan that exploits the indices?\n\nMaybe. Aside from semantic correctness issues, the big question\nwould be whether the detection could be made cheap enough to not\nbe a drag on the 99.99% of cases where it's not helpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 10:11:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimising outer joins in the presence of non-nullable references"
}
] |
[
{
"msg_contents": "I have a table 'sub_soc' with 3BIL records, it's been partitioned and indexed on the soc column. when the user is running a query with left join on this table and joining some other tables, the query planner doing a full table scan instead of looking into partitioned tables and index scan. \n\nSELECT \n t2.cid_hash AS BILLG_ACCT_CID_HASH ,\n t2.proxy_id AS INDVDL_ENTITY_PROXY_ID ,\n t2.accs_mthd AS ACCS_MTHD_CID_HASH\nFROM\n public.sub t2\nInner join acc t3 on t3.cid_hash = t2.cid_hash\nLeft join sub_soc t4 on (t2.accs_mthd = t4.accs_mthd\n AND t2.cid_hash = t4.cid_hash)\nWHERE\n ( ( (t3.acct = 'I' AND t3.acct_sub IN ( '4',\n'5' ) ) OR t2.ban IN ( '00','01','02','03','04','05' ) )\n OR (t4.soc = 'NFWJYW0' AND t4.curr_ind = 'Y') );\n\nIf I use AND instead of OR, it's doing partition & index scan; otherwise, it's a full scan.\nCan you please provide suggestions?\nFor DDL structure Postgres 11 | db<>fiddle \n\n\n| \n| \n| | \nPostgres 11 | db<>fiddle\n\nFree online SQL environment for experimenting and sharing.\n |\n\n |\n\n |\n\n\n\n\n\nThanks,Raj\n\nI have a table 'sub_soc' with 3BIL records, it's been partitioned and indexed on the soc column. when the user is running a query with left join on this table and joining some other tables, the query planner doing a full table scan instead of looking into partitioned tables and index scan. 
SELECT t2.cid_hash AS BILLG_ACCT_CID_HASH , t2.proxy_id AS INDVDL_ENTITY_PROXY_ID , t2.accs_mthd AS ACCS_MTHD_CID_HASHFROM public.sub t2Inner join acc t3 on t3.cid_hash = t2.cid_hashLeft join sub_soc t4 on (t2.accs_mthd = t4.accs_mthd AND t2.cid_hash = t4.cid_hash)WHERE ( ( (t3.acct = 'I' AND t3.acct_sub IN ( '4','5' ) ) OR t2.ban IN ( '00','01','02','03','04','05' ) ) OR (t4.soc = 'NFWJYW0' AND t4.curr_ind = 'Y') );If I use AND instead of OR, it's doing partition & index scan; otherwise, it's a full scan.Can you please provide suggestions?For DDL structure Postgres 11 | db<>fiddle Postgres 11 | db<>fiddleFree online SQL environment for experimenting and sharing.Thanks,Raj",
"msg_date": "Tue, 25 May 2021 22:50:48 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "issue partition scan"
},
{
"msg_contents": "\n> On May 25, 2021, at 15:50, Nagaraj Raj <[email protected]> wrote:\n> \n> SELECT \n> t2.cid_hash AS BILLG_ACCT_CID_HASH ,\n> t2.proxy_id AS INDVDL_ENTITY_PROXY_ID ,\n> t2.accs_mthd AS ACCS_MTHD_CID_HASH\n> FROM\n> public.sub t2\n> Inner join acc t3 on t3.cid_hash = t2.cid_hash\n> Left join sub_soc t4 on (t2.accs_mthd = t4.accs_mthd\n> AND t2.cid_hash = t4.cid_hash)\n> WHERE\n> ( ( (t3.acct = 'I' AND t3.acct_sub IN ( '4',\n> '5' ) ) OR t2.ban IN ( '00','01','02','03','04','05' ) )\n> OR (t4.soc = 'NFWJYW0' AND t4.curr_ind = 'Y') );\n\nAs written, with the OR, it cannot exclude any partitions from the query. The records returned will be from two merged sets:\n\n1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'\n\nIt can use constraint exclusion on these to only scan applicable partitions.\n\n2. Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )\n\nIt can't use constraint exclusion on these, since results can come from any partition.\n\n",
"msg_date": "Tue, 25 May 2021 16:01:39 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue partition scan"
},
{
"msg_contents": "Apologies, I didn't understand you completely.\n> 1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'\n\n> It can use constraint exclusion on these to only scan applicable partitions.\n\n> 2. Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )\n\n> It can't use constraint exclusion on these since results can come from any partition.\nWhy is it not using constraint exclusion on the above two conditions(1 and 2) included in the where clause ? \nBoth sets are pointing to different tables.\n On Tuesday, May 25, 2021, 04:01:53 PM PDT, Christophe Pettus <[email protected]> wrote: \n \n \n> On May 25, 2021, at 15:50, Nagaraj Raj <[email protected]> wrote:\n> \n> SELECT \n> t2.cid_hash AS BILLG_ACCT_CID_HASH ,\n> t2.proxy_id AS INDVDL_ENTITY_PROXY_ID ,\n> t2.accs_mthd AS ACCS_MTHD_CID_HASH\n> FROM\n> public.sub t2\n> Inner join acc t3 on t3.cid_hash = t2.cid_hash\n> Left join sub_soc t4 on (t2.accs_mthd = t4.accs_mthd\n> AND t2.cid_hash = t4.cid_hash)\n> WHERE\n> ( ( (t3.acct = 'I' AND t3.acct_sub IN ( '4',\n> '5' ) ) OR t2.ban IN ( '00','01','02','03','04','05' ) )\n> OR (t4.soc = 'NFWJYW0' AND t4.curr_ind = 'Y') );\n\nAs written, with the OR, it cannot exclude any partitions from the query. The records returned will be from two merged sets:\n\n1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'\n\nIt can use constraint exclusion on these to only scan applicable partitions.\n\n2. Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )\n\nIt can't use constraint exclusion on these, since results can come from any partition.\n\n \n\nApologies, I didn't understand you completely.> 1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'> It can use constraint exclusion on these to only scan applicable partitions.> 2. 
Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )> It can't use constraint exclusion on these since results can come from any partition.Why is it not using constraint exclusion on the above two conditions(1 and 2) included in the where clause ? Both sets are pointing to different tables.\n\n\n\n On Tuesday, May 25, 2021, 04:01:53 PM PDT, Christophe Pettus <[email protected]> wrote:\n \n\n\n> On May 25, 2021, at 15:50, Nagaraj Raj <[email protected]> wrote:> > SELECT > t2.cid_hash AS BILLG_ACCT_CID_HASH ,> t2.proxy_id AS INDVDL_ENTITY_PROXY_ID ,> t2.accs_mthd AS ACCS_MTHD_CID_HASH> FROM> public.sub t2> Inner join acc t3 on t3.cid_hash = t2.cid_hash> Left join sub_soc t4 on (t2.accs_mthd = t4.accs_mthd> AND t2.cid_hash = t4.cid_hash)> WHERE> ( ( (t3.acct = 'I' AND t3.acct_sub IN ( '4',> '5' ) ) OR t2.ban IN ( '00','01','02','03','04','05' ) )> OR (t4.soc = 'NFWJYW0' AND t4.curr_ind = 'Y') );As written, with the OR, it cannot exclude any partitions from the query. The records returned will be from two merged sets:1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'It can use constraint exclusion on these to only scan applicable partitions.2. Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )It can't use constraint exclusion on these, since results can come from any partition.",
"msg_date": "Tue, 25 May 2021 23:38:04 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: issue partition scan"
},
{
"msg_contents": "On Wed, 26 May 2021 at 11:38, Nagaraj Raj <[email protected]> wrote:\n>\n> Apologies, I didn't understand you completely.\n>\n> > 1. Those that have sub_soc.soc = 'NFWJYW0' and sub_soc.curr_ind = 'Y'\n>\n> > It can use constraint exclusion on these to only scan applicable partitions.\n>\n> > 2. Those that have (acc.acct = 'I' AND acc.acct_sub IN ( '4', '5' ) ) OR sub.ban IN ( '00','01','02','03','04','05' )\n>\n> > It can't use constraint exclusion on these since results can come from any partition.\n>\n> Why is it not using constraint exclusion on the above two conditions(1 and 2) included in the where clause ?\n>\n> Both sets are pointing to different tables.\n\nIt's because of the OR condition. If it was an AND condition then the\nplanner wouldn't have to consider the fact that records in other\npartitions might be required for the join.\n\nDavid\n\n\n",
"msg_date": "Wed, 26 May 2021 12:16:58 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue partition scan"
},
{
"msg_contents": "\n\n> On May 25, 2021, at 17:16, David Rowley <[email protected]> wrote:\n> \n> It's because of the OR condition. If it was an AND condition then the\n> planner wouldn't have to consider the fact that records in other\n> partitions might be required for the join.\n\nThe OP might consider rewriting the query as a UNION, with each part of the top-lkevel OR being a branch of the UNION, but excluding the partitioned table from the JOINs for the branch of the UNION that doesn't appear to actually require them.\n\n",
"msg_date": "Tue, 25 May 2021 17:27:27 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue partition scan"
}
] |
[
{
"msg_contents": "Hello Pgsql-performance,\n\nTo not flood network with many parameters I send only one and use `WITH` hack to reuse value inside query:\n\nWITH\n_app_period AS ( select app_period() ),\nready AS (\nSELECT\n\n min( lower( o.app_period ) ) OVER ( PARTITION BY agreement_id ) <@ (select * from _app_period) AS new_order,\n max( upper( o.app_period ) ) OVER ( PARTITION BY agreement_id ) <@ (select * from _app_period) AS del_order\n ,o.*\nFROM \"order_bt\" o\nLEFT JOIN acc_ready( 'Usage', (select * from _app_period), o ) acc_u ON acc_u.ready\nLEFT JOIN acc_ready( 'Invoice', (select * from _app_period), o ) acc_i ON acc_i.ready\n\n\nLEFT JOIN agreement a ON a.id = o.agreement_id\nLEFT JOIN xcheck c ON c.doc_id = o.id and c.doctype = 'OrderDetail'\n\nWHERE o.sys_period @> sys_time() AND o.app_period && (select * from _app_period)\n)\nSELECT * FROM ready\n\nhttps://explain.depesz.com/s/kDCJ3#query\n\nbut becaues of this `acc_ready` is not inlined and I get perfomance\ndowngrade.\n\nCan we mark here (select * from _app_period) subquery as constant and\nallow to pass inline condition:\n\n>none of the actual arguments contain volatile expressions or subselects\nhttps://wiki.postgresql.org/wiki/Inlining_of_SQL_functions\n\nthis subselect is not volatile and could be expanded to constant\n\n\nWhat do you think about this proposition?\n\nI expect it to spent 0.5ms instead of 14ms like here (I put app_period() explicitly)\nhttps://explain.depesz.com/s/iNTw 30 times faster!\n\nEXPLAIN( ANALYSE, FORMAT JSON, VERBOSE, settings, buffers )\nWITH ready AS (\nSELECT\n\n min( lower( o.app_period ) ) OVER ( PARTITION BY agreement_id ) <@ app_period() AS new_order,\n max( upper( o.app_period ) ) OVER ( PARTITION BY agreement_id ) <@ app_period() AS del_order\n ,o.*\nFROM \"order_bt\" o\nLEFT JOIN acc_ready( 'Usage', app_period(), o ) acc_u ON acc_u.ready\nLEFT JOIN acc_ready( 'Invoice', app_period(), o ) acc_i ON acc_i.ready\n\n\nLEFT JOIN agreement a ON a.id = 
o.agreement_id\nLEFT JOIN xcheck c ON c.doc_id = o.id and c.doctype = 'OrderDetail'\n\nWHERE o.sys_period @> sys_time() AND o.app_period && app_period()\n)\nSELECT * FROM ready\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Thu, 27 May 2021 13:37:33 +0300",
"msg_from": "Eugen Konkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Count (select 1) subquery as constant"
}
] |
[
{
"msg_contents": "I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 at \none point), gradually moving to v9.0 w/ replication in 2010. In 2017 I \nmoved my 20GB database to AWS/RDS, gradually upgrading to v9.6, & was \nentirely satisfied with the result.\n\nIn March of this year, AWS announced that v9.6 was nearing end of \nsupport, & AWS would forcibly upgrade everyone to v12 on January 22, \n2022, if users did not perform the upgrade earlier. My first attempt \nwas successful as far as the upgrade itself, but complex queries that \nnormally ran in a couple of seconds on v9.x, were taking minutes in v12.\n\nI didn't have the time in March to diagnose the problem, other than some \nfutile adjustments to server parameters, so I reverted back to a saved \ncopy of my v9.6 data.\n\nOn Sunday, being retired, I decided to attempt to solve the issue in \nearnest. I have now spent five days (about 14 hours a day), trying \nvarious things. Keeping the v9.6 data online for web users, I've \n\"forked\" the data into a new copy, & updated it in turn to PostgreSQL \nv10, v11, v12, & v13. All exhibit the same problem: As you will see \nbelow, it appears that versions 10 & above are doing a sequential scan \nof some of the \"large\" (200K rows) tables. Note that the expected & \nactual run times for v9.6 & v13.2 both differ by more than *two orders \nof magnitude*. Rather than post a huge eMail (ha ha), I'll start with \nthis one, that shows an \"EXPLAIN ANALYZE\" from both v9.6 & v13.2, \nfollowed by the related table & view definitions. 
With one exception, \ntable definitions are from the FCC (Federal Communications Commission); \nthe view definitions are my own.\n\n*Here's from v9.6:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407.13..407.13 rows=1 width=94) (actual \ntime=348.850..348.859 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual \ntime=7.587..348.732 rows=43 loops=1)\n -> Nested Loop (cost=4.47..394.66 rows=1 width=94) (actual \ntime=5.740..248.149 rows=43 loops=1)\n -> Nested Loop Left Join (cost=4.04..382.20 rows=1 \nwidth=79) (actual time=2.458..107.908 rows=55 loops=1)\n -> Hash Join (cost=3.75..380.26 rows=1 width=86) \n(actual time=2.398..106.990 rows=55 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Nested Loop (cost=0.43..376.46 rows=47 \nwidth=94) (actual time=2.294..106.736 rows=55 loops=1)\n -> Seq Scan on \"_Club\" \n(cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 \nloops=1)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 151\n -> Index Scan using \"_EN_callsign\" on \n\"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 \nrows=1 loops=44)\n Index Cond: (callsign = \n\"_Club\".trustee_callsign)\n -> Hash (cost=1.93..1.93 rows=93 width=7) \n(actual time=0.071..0.071 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" 
\n(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 loops=1)\n -> Nested Loop (cost=0.29..1.93 rows=1 width=7) \n(actual time=0.012..0.014 rows=1 loops=55)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62 rows=1 \nwidth=3) (actual time=0.006..0.006 rows=1 loops=55)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 55\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n (actual time=0.004..0.005 rows=1 loops=55)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 59\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 \nloops=55)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.15..8.17 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=55)\n -> Index Scan using \"_LicStatus_pkey\" on \n\"_LicStatus\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=55)\n Index Cond: (\"_HD\".license_status = \nstatus_id)\n -> Index Scan using \"_AM_pkey\" on \"_AM\" (cost=0.43..4.27 \nrows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: (\"_EN\".callsign = callsign)\n SubPlan 1\n -> Limit (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.007..0.007 rows=1 loops=43)\n -> Index Scan using \"_ApplicantType_pkey\" on \n\"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=43)\n Index Cond: (\"_EN\".applicant_type_code = 
\napp_type_id)\n Planning time: 13.490 ms\n Execution time: 349.182 ms\n(43 rows)\n\n\n*Here's from v13.2:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=144365.60..144365.60 rows=1 width=94) (actual \ntime=31898.860..31901.922 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94) (actual \ntime=6132.403..31894.233 rows=43 loops=1)\n -> Nested Loop (cost=58055.51..144364.21 rows=1 width=62) \n(actual time=1226.085..30337.921 rows=837792 loops=1)\n -> Nested Loop Left Join (cost=58055.09..144360.38 \nrows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)\n -> Hash Join (cost=58054.80..144359.69 rows=1 \nwidth=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".unique_system_identifier \n= \"_AM\".unique_system_identifier) AND (\"_EN\".callsign = \"_AM\".callsign))\n -> Hash Join (cost=3.33..53349.72 \nrows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Seq Scan on \"_EN\" \n(cost=0.00..45288.05 rows=1509005 width=60) (actual time=0.037..2737.054 \nrows=1508736 loops=1)\n -> Hash (cost=1.93..1.93 rows=93 \nwidth=7) (actual time=0.706..1.264 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" \n(cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 
loops=1)\n -> Hash (cost=28093.99..28093.99 \nrows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 loops=1)\n Buckets: 131072 Batches: 32 Memory \nUsage: 3175kB\n -> Seq Scan on \"_AM\" \n(cost=0.00..28093.99 rows=1506699 width=15) (actual time=0.009..742.774 \nrows=1506474 loops=1)\n -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n(actual time=0.003..0.004 rows=1 loops=1487153)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \nwidth=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 1487153\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \ntime=0.001..0.001 rows=1 loops=1487153)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 1550706\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 \nloops=1487153)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.00..1.07 rows=1 width=13) \n(actual time=0.001..0.001 rows=1 loops=1487153)\n -> Seq Scan on \"_LicStatus\" \n(cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \nloops=1487153)\n Filter: (\"_HD\".license_status = \nstatus_id)\n Rows Removed by Filter: 1\n -> Index Scan using \"_Club_pkey\" on \"_Club\" (cost=0.14..0.17 \nrows=1 width=35) (actual time=0.002..0.002 rows=0 loops=837792)\n Index Cond: (trustee_callsign = \"_EN\".callsign)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 0\n SubPlan 1\n -> Limit (cost=0.00..1.20 rows=1 
width=15) (actual \ntime=0.060..0.060 rows=1 loops=43)\n -> Seq Scan on \"_ApplicantType\" (cost=0.00..1.20 \nrows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)\n Filter: (\"_EN\".applicant_type_code = app_type_id)\n Rows Removed by Filter: 7\n Planning Time: 173.753 ms\n Execution Time: 31919.601 ms\n(46 rows)\n\n\n*VIEW genclub_multi_:*\n\n=> \\d+ genclub_multi_\n View \"Callsign.genclub_multi_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | 
text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\n club_count | bigint | | | | \nplain |\n extra_count | bigint | | | | \nplain |\n region_count | bigint | | | | \nplain |\nView definition:\n SELECT licjb_.sys_id,\n licjb_.callsign,\n licjb_.fcc_reg_num,\n licjb_.licensee_id,\n licjb_.subgroup_id_num,\n licjb_.applicant_type,\n licjb_.entity_type,\n licjb_.entity_name,\n licjb_.attention,\n licjb_.first_name,\n licjb_.middle_init,\n licjb_.last_name,\n licjb_.name_suffix,\n licjb_.street_address,\n licjb_.po_box,\n licjb_.locality,\n licjb_.locality_,\n licjb_.county,\n licjb_.state,\n licjb_.postal_code,\n licjb_.full_name,\n licjb_._entity_name,\n licjb_._first_name,\n licjb_._last_name,\n licjb_.zip5,\n licjb_.zip_location,\n licjb_.maidenhead,\n licjb_.geo_region,\n licjb_.uls_file_num,\n licjb_.radio_service,\n licjb_.license_status,\n licjb_.grant_date,\n licjb_.effective_date,\n licjb_.cancel_date,\n licjb_.expire_date,\n licjb_.end_date,\n licjb_.available_date,\n licjb_.last_action_date,\n licjb_.uls_region,\n licjb_.callsign_group,\n licjb_.operator_group,\n licjb_.operator_class,\n licjb_.prev_class,\n licjb_.prev_callsign,\n 
licjb_.vanity_type,\n licjb_.is_trustee,\n licjb_.trustee_callsign,\n licjb_.trustee_name,\n licjb_.validity,\n gen.club_count,\n gen.extra_count,\n gen.region_count\n FROM licjb_,\n \"GenLicClub\" gen\n WHERE licjb_.callsign = gen.trustee_callsign AND \nlicjb_.license_status::character(1) = 'A'::bpchar;\n*\n**VIEW GenLicClub:*\n\n=> \\d+ \"GenLicClub\"\n View \"Callsign.GenLicClub\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n trustee_callsign | character(10) | | | | extended |\n club_count | bigint | | | | plain |\n extra_count | bigint | | | | plain |\n region_count | bigint | | | | plain |\nView definition:\n SELECT \"_Club\".trustee_callsign,\n \"_Club\".club_count,\n \"_Club\".extra_count,\n \"_Club\".region_count\n FROM \"GenLic\".\"_Club\";\n\n*TABLE \"GenLic\".\"_Club\":*\n\n=> \\d+ \"GenLic\".\"_Club\"\n Table \"GenLic._Club\"\n Column | Type | Collation | Nullable | Default | \nStorage | Stats target | Description\n------------------+---------------+-----------+----------+---------+----------+--------------+-------------\n trustee_callsign | character(10) | | not null | | extended \n| |\n club_count | bigint | | | | plain \n| |\n extra_count | bigint | | | | plain \n| |\n region_count | bigint | | | | plain \n| |\nIndexes:\n \"_Club_pkey\" PRIMARY KEY, btree (trustee_callsign)\n\n*VIEW licjb_:*\n\n=> \\d+ licjb_\n View \"Callsign.licjb_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | 
\nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\nView definition:\n SELECT 
lic_en_.sys_id,\n lic_en_.callsign,\n lic_en_.fcc_reg_num,\n lic_en_.licensee_id,\n lic_en_.subgroup_id_num,\n lic_en_.applicant_type,\n lic_en_.entity_type,\n lic_en_.entity_name,\n lic_en_.attention,\n lic_en_.first_name,\n lic_en_.middle_init,\n lic_en_.last_name,\n lic_en_.name_suffix,\n lic_en_.street_address,\n lic_en_.po_box,\n lic_en_.locality,\n lic_en_.locality_,\n lic_en_.county,\n lic_en_.state,\n lic_en_.postal_code,\n lic_en_.full_name,\n lic_en_._entity_name,\n lic_en_._first_name,\n lic_en_._last_name,\n lic_en_.zip5,\n lic_en_.zip_location,\n lic_en_.maidenhead,\n lic_en_.geo_region,\n lic_hd_.uls_file_num,\n lic_hd_.radio_service,\n lic_hd_.license_status,\n lic_hd_.grant_date,\n lic_hd_.effective_date,\n lic_hd_.cancel_date,\n lic_hd_.expire_date,\n lic_hd_.end_date,\n lic_hd_.available_date,\n lic_hd_.last_action_date,\n lic_am_.uls_region,\n lic_am_.callsign_group,\n lic_am_.operator_group,\n lic_am_.operator_class,\n lic_am_.prev_class,\n lic_am_.prev_callsign,\n lic_am_.vanity_type,\n lic_am_.is_trustee,\n lic_am_.trustee_callsign,\n lic_am_.trustee_name,\n CASE\n WHEN lic_am_.vanity_type::character(1) = ANY \n(ARRAY['A'::bpchar, 'C'::bpchar]) THEN verify_callsign(lic_en_.callsign, \nlic_en_.licensee_id, lic_hd_.grant_date, lic_en_.state::bpchar, \nlic_am_.operator_class::bpchar, lic_en_.applicant_type::bpchar, \nlic_am_.trustee_callsign)\n ELSE NULL::integer\n END AS validity\n FROM lic_en_\n JOIN lic_hd_ USING (sys_id, callsign)\n JOIN lic_am_ USING (sys_id, callsign);\n\n*VIEW lic_en_:*\n\n=> \\d+ lic_en_\n View \"Callsign.lic_en_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended 
|\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\nView definition:\n SELECT lic_en.sys_id,\n lic_en.callsign,\n lic_en.fcc_reg_num,\n lic_en.licensee_id,\n lic_en.subgroup_id_num,\n (lic_en.applicant_type::text || ' - '::text) || COALESCE(( SELECT \n\"ApplicantType\".app_type_text\n FROM \"ApplicantType\"\n WHERE lic_en.applicant_type = \"ApplicantType\".app_type_id\n LIMIT 1), '???'::character varying)::text AS applicant_type,\n (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT \n\"EntityType\".entity_text\n FROM \"EntityType\"\n WHERE lic_en.entity_type = \"EntityType\".entity_id\n LIMIT 1), '???'::character varying)::text AS entity_type,\n lic_en.entity_name,\n lic_en.attention,\n lic_en.first_name,\n lic_en.middle_init,\n lic_en.last_name,\n lic_en.name_suffix,\n lic_en.street_address,\n lic_en.po_box,\n lic_en.locality,\n zip_code.locality_text AS locality_,\n \"County\".county_text AS county,\n (territory_id::text 
|| ' - '::text) || \nCOALESCE(govt_region.territory_text, '???'::character varying)::text AS \nstate,\n zip9_format(lic_en.postal_code::text) AS postal_code,\n lic_en.full_name,\n lic_en._entity_name,\n lic_en._first_name,\n lic_en._last_name,\n lic_en.zip5,\n zip_code.zip_location,\n maidenhead(zip_code.zip_location) AS maidenhead,\n govt_region.geo_region\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\n*VIEW lic_en:*\n\n=> \\d+ lic_en\n View \"Callsign.lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT _lic_en.sys_id,\n _lic_en.callsign,\n _lic_en.fcc_reg_num,\n _lic_en.licensee_id,\n _lic_en.subgroup_id_num,\n _lic_en.applicant_type,\n _lic_en.entity_type,\n _lic_en.entity_name,\n _lic_en.attention,\n _lic_en.first_name,\n _lic_en.middle_init,\n _lic_en.last_name,\n _lic_en.name_suffix,\n _lic_en.street_address,\n _lic_en.po_box,\n _lic_en.locality,\n _lic_en.territory_id,\n _lic_en.postal_code,\n _lic_en.full_name,\n _lic_en._entity_name,\n _lic_en._first_name,\n _lic_en._last_name,\n _lic_en.zip5,\n _lic_en.country_id\n FROM _lic_en;\n\n*VIEW _lic_en:*\n\n=> \\d+ _lic_en\n View \"Callsign._lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT \"_EN\".unique_system_identifier AS sys_id,\n \"_EN\".callsign,\n \"_EN\".frn AS fcc_reg_num,\n \"_EN\".licensee_id,\n \"_EN\".sgin AS subgroup_id_num,\n \"_EN\".applicant_type_code AS applicant_type,\n \"_EN\".entity_type,\n \"_EN\".entity_name,\n \"_EN\".attention_line AS attention,\n \"_EN\".first_name,\n \"_EN\".mi AS middle_init,\n \"_EN\".last_name,\n \"_EN\".suffix AS name_suffix,\n \"_EN\".street_address,\n po_box_format(\"_EN\".po_box::text) AS po_box,\n \"_EN\".city AS locality,\n \"_EN\".state AS territory_id,\n \"_EN\".zip_code AS postal_code,\n initcap(((COALESCE(\"_EN\".first_name::text || ' '::text, ''::text) \n|| COALESCE(\"_EN\".mi::text || ' '::text, ''::text)) || \n\"_EN\".last_name::text) || COALESCE(' '::text || \"_EN\".suffix::text, \n''::text)) AS full_name,\n initcap(\"_EN\".entity_name::text) AS _entity_name,\n initcap(\"_EN\".first_name::text) AS _first_name,\n initcap(\"_EN\".last_name::text) AS _last_name,\n \"_EN\".zip_code::character(5) AS zip5,\n \"_EN\".country_id\n FROM \"UlsLic\".\"_EN\";\n\n*TABLE \"UlsLic\".\"_EN\"**:*\n\n=> \\d+ \"UlsLic\".\"_EN\"\n Table \"UlsLic._EN\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Description\n--------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | | not \nnull | | extended | |\n unique_system_identifier | integer | | not \nnull | | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n entity_type | character(2) | | \n| | extended | |\n licensee_id | character(9) | | \n| | extended | |\n entity_name | character varying(200) | | \n| | extended | |\n first_name | character varying(20) | | \n| | extended | |\n mi | character(1) | | \n| | extended | |\n last_name | character varying(20) | | \n| | extended | |\n suffix | 
character(3) | | \n| | extended | |\n phone | character(10) | | \n| | extended | |\n fax | character(10) | | \n| | extended | |\n email | character varying(50) | | \n| | extended | |\n street_address | character varying(60) | | \n| | extended | |\n city | character varying | | \n| | extended | |\n state | character(2) | | \n| | extended | |\n zip_code | character(9) | | \n| | extended | |\n po_box | character varying(20) | | \n| | extended | |\n attention_line | character varying(35) | | \n| | extended | |\n sgin | character(3) | | \n| | extended | |\n frn | character(10) | | \n| | extended | |\n applicant_type_code | character(1) | | \n| | extended | |\n applicant_type_other | character(40) | | \n| | extended | |\n status_code | character(1) | | \n| | extended | |\n status_date | \"MySql\".datetime | | \n| | plain | |\n lic_category_code | character(1) | | \n| | extended | |\n linked_license_id | numeric(9,0) | | \n| | main | |\n linked_callsign | character(10) | | \n| | extended | |\n country_id | character(2) | | \n| | extended | |\nIndexes:\n \"_EN_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_EN__entity_name\" btree (initcap(entity_name::text))\n \"_EN__first_name\" btree (initcap(first_name::text))\n \"_EN__last_name\" btree (initcap(last_name::text))\n \"_EN__zip5\" btree ((zip_code::character(5)))\n \"_EN_callsign\" btree (callsign)\n \"_EN_fcc_reg_num\" btree (frn)\n \"_EN_licensee_id\" btree (licensee_id)\nCheck constraints:\n \"_EN_record_type_check\" CHECK (record_type = 'EN'::bpchar)\nForeign-key constraints:\n \"_EN_applicant_type_code_fkey\" FOREIGN KEY (applicant_type_code) \nREFERENCES \"FccLookup\".\"_ApplicantType\"(app_type_id\n)\n \"_EN_entity_type_fkey\" FOREIGN KEY (entity_type) REFERENCES \n\"FccLookup\".\"_EntityType\"(entity_id)\n \"_EN_state_fkey\" FOREIGN KEY (state, country_id) REFERENCES \n\"BaseLookup\".\"_Territory\"(territory_id, country_id)\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) 
REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n*VIEW lic_hd_:*\n\n=> \\d+ lic_hd_\n View \"Callsign.lic_hd_\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | text | | | | extended |\n license_status | text | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n end_date | date | | | | plain |\n available_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT lic_hd.sys_id,\n lic_hd.callsign,\n lic_hd.uls_file_num,\n (lic_hd.radio_service::text || ' - '::text) || COALESCE(( SELECT \n\"RadioService\".service_text\n FROM \"RadioService\"\n WHERE lic_hd.radio_service = \"RadioService\".service_id\n LIMIT 1), '???'::character varying)::text AS radio_service,\n (lic_hd.license_status::text || ' - '::text) || COALESCE(( SELECT \n\"LicStatus\".status_text\n FROM \"LicStatus\"\n WHERE lic_hd.license_status = \"LicStatus\".status_id\n LIMIT 1), '???'::character varying)::text AS license_status,\n lic_hd.grant_date,\n lic_hd.effective_date,\n lic_hd.cancel_date,\n lic_hd.expire_date,\n LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,\n CASE\n WHEN lic_hd.cancel_date < lic_hd.expire_date THEN \nGREATEST((lic_hd.cancel_date + '2 years'::interval)::date, \nlic_hd.last_action_date + 30)\n WHEN lic_hd.license_status = 'A'::bpchar AND uls_date() > \n(lic_hd.expire_date + '2 years'::interval)::date THEN NULL::date\n ELSE (lic_hd.expire_date + '2 years'::interval)::date\n END + 1 AS available_date,\n lic_hd.last_action_date\n FROM lic_hd;\n\n*VIEW lic_hd:*\n\n=> \\d+ lic_hd\n View \"Callsign.lic_hd\"\n 
Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | character(2) | | | | extended |\n license_status | character(1) | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT _lic_hd.sys_id,\n _lic_hd.callsign,\n _lic_hd.uls_file_num,\n _lic_hd.radio_service,\n _lic_hd.license_status,\n _lic_hd.grant_date,\n _lic_hd.effective_date,\n _lic_hd.cancel_date,\n _lic_hd.expire_date,\n _lic_hd.last_action_date\n FROM _lic_hd;\n\n*VIEW _lic_hd:*\n\n=> \\d+ _lic_hd\n View \"Callsign._lic_hd\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | character(2) | | | | extended |\n license_status | character(1) | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT \"_HD\".unique_system_identifier AS sys_id,\n \"_HD\".callsign,\n \"_HD\".uls_file_number AS uls_file_num,\n \"_HD\".radio_service_code AS radio_service,\n \"_HD\".license_status,\n \"_HD\".grant_date,\n \"_HD\".effective_date,\n \"_HD\".cancellation_date AS cancel_date,\n \"_HD\".expired_date AS expire_date,\n \"_HD\".last_action_date\n FROM \"UlsLic\".\"_HD\";\n\n*TABLE **\"UlsLic\".\"_HD\"**:*\n\n=> \\d+ \"UlsLic\".\"_HD\"\n Table 
\"UlsLic._HD\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Descr\niption\n------------------------------+-----------------------+-----------+----------+---------+----------+--------------+------\n-------\n record_type | character(2) | | not null \n| | extended | |\n unique_system_identifier | integer | | not null \n| | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n license_status | character(1) | | \n| | extended | |\n radio_service_code | character(2) | | \n| | extended | |\n grant_date | date | | \n| | plain | |\n expired_date | date | | \n| | plain | |\n cancellation_date | date | | \n| | plain | |\n eligibility_rule_num | character(10) | | \n| | extended | |\n applicant_type_code_reserved | character(1) | | \n| | extended | |\n alien | character(1) | | \n| | extended | |\n alien_government | character(1) | | \n| | extended | |\n alien_corporation | character(1) | | \n| | extended | |\n alien_officer | character(1) | | \n| | extended | |\n alien_control | character(1) | | \n| | extended | |\n revoked | character(1) | | \n| | extended | |\n convicted | character(1) | | \n| | extended | |\n adjudged | character(1) | | \n| | extended | |\n involved_reserved | character(1) | | \n| | extended | |\n common_carrier | character(1) | | \n| | extended | |\n non_common_carrier | character(1) | | \n| | extended | |\n private_comm | character(1) | | \n| | extended | |\n fixed | character(1) | | \n| | extended | |\n mobile | character(1) | | \n| | extended | |\n radiolocation | character(1) | | \n| | extended | |\n satellite | character(1) | | \n| | extended | |\n developmental_or_sta | character(1) | | \n| | extended | |\n interconnected_service | character(1) | | \n| | extended | |\n certifier_first_name | character varying(20) | | \n| | extended | |\n certifier_mi | character varying | | \n| | extended | |\n 
certifier_last_name | character varying | | \n| | extended | |\n certifier_suffix | character(3) | | \n| | extended | |\n certifier_title | character(40) | | \n| | extended | |\n gender | character(1) | | \n| | extended | |\n african_american | character(1) | | \n| | extended | |\n native_american | character(1) | | \n| | extended | |\n hawaiian | character(1) | | \n| | extended | |\n asian | character(1) | | \n| | extended | |\n white | character(1) | | \n| | extended | |\n ethnicity | character(1) | | \n| | extended | |\n effective_date | date | | \n| | plain | |\n last_action_date | date | | \n| | plain | |\n auction_id | integer | | \n| | plain | |\n reg_stat_broad_serv | character(1) | | \n| | extended | |\n band_manager | character(1) | | \n| | extended | |\n type_serv_broad_serv | character(1) | | \n| | extended | |\n alien_ruling | character(1) | | \n| | extended | |\n licensee_name_change | character(1) | | \n| | extended | |\n whitespace_ind | character(1) | | \n| | extended | |\n additional_cert_choice | character(1) | | \n| | extended | |\n additional_cert_answer | character(1) | | \n| | extended | |\n discontinuation_ind | character(1) | | \n| | extended | |\n regulatory_compliance_ind | character(1) | | \n| | extended | |\n dummy1 | character varying | | \n| | extended | |\n dummy2 | character varying | | \n| | extended | |\n dummy3 | character varying | | \n| | extended | |\n dummy4 | character varying | | \n| | extended | |\nIndexes:\n \"_HD_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_HD_callsign\" btree (callsign)\n \"_HD_grant_date\" btree (grant_date)\n \"_HD_last_action_date\" btree (last_action_date)\n \"_HD_uls_file_num\" btree (uls_file_number)\nCheck constraints:\n \"_HD_record_type_check\" CHECK (record_type = 'HD'::bpchar)\nForeign-key constraints:\n \"_HD_license_status_fkey\" FOREIGN KEY (license_status) REFERENCES \n\"FccLookup\".\"_LicStatus\"(status_id)\n \"_HD_radio_service_code_fkey\" FOREIGN KEY (radio_service_code) 
\nREFERENCES \"FccLookup\".\"_RadioService\"(service_id)\nReferenced by:\n TABLE \"\"UlsLic\".\"_AM\"\" CONSTRAINT \n\"_AM_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_CO\"\" CONSTRAINT \n\"_CO_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_EN\"\" CONSTRAINT \n\"_EN_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_HS\"\" CONSTRAINT \n\"_HS_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_LA\"\" CONSTRAINT \n\"_LA_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_SC\"\" CONSTRAINT \n\"_SC_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_SF\"\" CONSTRAINT \n\"_SF_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n\n*VIEW lic_am_:*\n\n=> \\d+ lic_am_\n View \"Callsign.lic_am_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n 
 callsign_group | text | | | | extended |
 operator_group | text | | | | extended |
 operator_class | text | | | | extended |
 prev_class | text | | | | extended |
 prev_callsign | character(10) | | | | extended |
 vanity_type | text | | | | extended |
 is_trustee | character(1) | | | | extended |
 trustee_callsign | character(10) | | | | extended |
 trustee_name | character varying(50) | | | | extended |
View definition:
 SELECT lic_am.sys_id,
    lic_am.callsign,
    lic_am.uls_region,
    ( SELECT ("CallsignGroup".group_id::text || ' - '::text) || "CallsignGroup".match_text::text
           FROM "CallsignGroup"
          WHERE lic_am.callsign ~ "CallsignGroup".pattern::text
         LIMIT 1) AS callsign_group,
    ( SELECT (oper_group.group_id::text || ' - '::text) || oper_group.group_text::text
           FROM oper_group
          WHERE lic_am.operator_class = oper_group.class_id
         LIMIT 1) AS operator_group,
    (lic_am.operator_class::text || ' - '::text) || COALESCE(( SELECT "OperatorClass".class_text
           FROM "OperatorClass"
          WHERE lic_am.operator_class = "OperatorClass".class_id
         LIMIT 1), '???'::character varying)::text AS operator_class,
    (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT "OperatorClass".class_text
           FROM "OperatorClass"
          WHERE lic_am.prev_class = "OperatorClass".class_id
         LIMIT 1), '???'::character varying)::text AS prev_class,
    lic_am.prev_callsign,
    (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT "VanityType".vanity_text
           FROM "VanityType"
          WHERE lic_am.vanity_type = "VanityType".vanity_id
         LIMIT 1), '???'::character varying)::text AS vanity_type,
    lic_am.is_trustee,
    lic_am.trustee_callsign,
    lic_am.trustee_name
   FROM lic_am;

VIEW lic_am:

=> \d+ lic_am
                    View "Callsign.lic_am"
 Column | Type | Collation | Nullable | Default | Storage | Description
--------+------+-----------+----------+---------+---------+-------------
 sys_id | integer | | | | plain |
 callsign | character(10) | | | | extended |
 uls_region | "MySql".tinyint | | | | plain |
 uls_group | character(1) | | | | extended |
 operator_class | character(1) | | | | extended |
 prev_callsign | character(10) | | | | extended |
 prev_class | character(1) | | | | extended |
 vanity_type | character(1) | | | | extended |
 is_trustee | character(1) | | | | extended |
 trustee_callsign | character(10) | | | | extended |
 trustee_name | character varying(50) | | | | extended |
View definition:
 SELECT _lic_am.sys_id,
    _lic_am.callsign,
    _lic_am.uls_region,
    _lic_am.uls_group,
    _lic_am.operator_class,
    _lic_am.prev_callsign,
    _lic_am.prev_class,
    _lic_am.vanity_type,
    _lic_am.is_trustee,
    _lic_am.trustee_callsign,
    _lic_am.trustee_name
   FROM _lic_am;

VIEW _lic_am:

=> \d+ _lic_am
                    View "Callsign._lic_am"
 Column | Type | Collation | Nullable | Default | Storage | Description
--------+------+-----------+----------+---------+---------+-------------
 sys_id | integer | | | | plain |
 callsign | character(10) | | | | extended |
 uls_region | "MySql".tinyint | | | | plain |
 uls_group | character(1) | | | | extended |
 operator_class | character(1) | | | | extended |
 prev_callsign | character(10) | | | | extended |
 prev_class | character(1) | | | | extended |
 vanity_type | character(1) | | | | extended |
 is_trustee | character(1) | | | | extended |
 trustee_callsign | character(10) | | | | extended |
 trustee_name | character varying(50) | | | | extended |
View definition:
 SELECT "_AM".unique_system_identifier AS sys_id,
    "_AM".callsign,
    "_AM".region_code AS uls_region,
    "_AM".group_code AS uls_group,
    "_AM".operator_class,
    "_AM".previous_callsign AS prev_callsign,
    "_AM".previous_operator_class AS prev_class,
    "_AM".vanity_callsign_change AS vanity_type,
    "_AM".trustee_indicator AS is_trustee,
    "_AM".trustee_callsign,
    "_AM".trustee_name
   FROM "UlsLic"."_AM";

TABLE "UlsLic"."_AM":

=> \d+ "UlsLic"."_AM"
                    Table "UlsLic._AM"
 Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
 record_type | character(2) | | not null | | extended | |
 unique_system_identifier | integer | | not null | | plain | |
 uls_file_number | character(14) | | | | extended | |
 ebf_number | character varying(30) | | | | extended | |
 callsign | character(10) | | | | extended | |
 operator_class | character(1) | | | | extended | |
 group_code | character(1) | | | | extended | |
 region_code | "MySql".tinyint | | | | plain | |
 trustee_callsign | character(10) | | | | extended | |
 trustee_indicator | character(1) | | | | extended | |
 physician_certification | character(1) | | | | extended | |
 ve_signature | character(1) | | | | extended | |
 systematic_callsign_change | character(1) | | | | extended | |
 vanity_callsign_change | character(1) | | | | extended | |
 vanity_relationship | character(12) | | | | extended | |
 previous_callsign | character(10) | | | | extended | |
 previous_operator_class | character(1) | | | | extended | |
 trustee_name | character varying(50) | | | | extended | |
Indexes:
    "_AM_pkey" PRIMARY KEY, btree (unique_system_identifier)
    "_AM_callsign" btree (callsign)
    "_AM_prev_callsign" btree (previous_callsign)
    "_AM_trustee_callsign" btree (trustee_callsign)
Check constraints:
    "_AM_record_type_check" CHECK (record_type = 'AM'::bpchar)
Foreign-key constraints:
    "_AM_operator_class_fkey" FOREIGN KEY (operator_class) REFERENCES "FccLookup"."_OperatorClass"(class_id)
    "_AM_previous_operator_class_fkey" FOREIGN KEY (previous_operator_class) REFERENCES
\"FccLookup\".\"_OperatorClass\"(cla\nss_id)\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY \n(vanity_callsign_change) REFERENCES \"FccLookup\".\"_VanityType\"(vanity_i\nd)\n\n\n\n\n\n\n\n I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n at one point), gradually moving to v9.0 w/ replication in 2010. In\n 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to\n v9.6, & was entirely satisfied with the result.\n\n In March of this year, AWS announced that v9.6 was nearing end of\n support, & AWS would forcibly upgrade everyone to v12 on January\n 22, 2022, if users did not perform the upgrade earlier. My first\n attempt was successful as far as the upgrade itself, but complex\n queries that normally ran in a couple of seconds on v9.x, were\n taking minutes in v12.\n\n I didn't have the time in March to diagnose the problem, other than\n some futile adjustments to server parameters, so I reverted back to\n a saved copy of my v9.6 data.\n\n On Sunday, being retired, I decided to attempt to solve the issue in\n earnest. I have now spent five days (about 14 hours a day), trying\n various things. Keeping the v9.6 data online for web users, I've\n \"forked\" the data into a new copy, & updated it in turn to\n PostgreSQL v10, v11, v12, & v13. All exhibit the same problem: \n As you will see below, it appears that versions 10 & above are\n doing a sequential scan of some of the \"large\" (200K rows) tables. \n Note that the expected & actual run times for v9.6 & v13.2\n both differ by more than two orders of magnitude. Rather\n than post a huge eMail (ha ha), I'll start with this one, that shows\n an \"EXPLAIN ANALYZE\" from both v9.6 & v13.2, followed by the\n related table & view definitions. 
With one exception, table definitions are from the FCC (Federal Communications Commission); the view definitions are my own.

Here's from v9.6:

=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, callsign AS trustee_callsign, applicant_type, entity_name, licensee_id AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count DESC, club_count DESC, entity_name;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=407.13..407.13 rows=1 width=94) (actual time=348.850..348.859 rows=43 loops=1)
   Sort Key: "_Club".extra_count DESC, "_Club".club_count DESC, "_EN".entity_name
   Sort Method: quicksort  Memory: 31kB
   ->  Nested Loop  (cost=4.90..407.12 rows=1 width=94) (actual time=7.587..348.732 rows=43 loops=1)
         ->  Nested Loop  (cost=4.47..394.66 rows=1 width=94) (actual time=5.740..248.149 rows=43 loops=1)
               ->  Nested Loop Left Join  (cost=4.04..382.20 rows=1 width=79) (actual time=2.458..107.908 rows=55 loops=1)
                     ->  Hash Join  (cost=3.75..380.26 rows=1 width=86) (actual time=2.398..106.990 rows=55 loops=1)
                           Hash Cond: (("_EN".country_id = "_GovtRegion".country_id) AND ("_EN".state = "_GovtRegion".territory_id))
                           ->  Nested Loop  (cost=0.43..376.46 rows=47 width=94) (actual time=2.294..106.736 rows=55 loops=1)
                                 ->  Seq Scan on "_Club"  (cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 loops=1)
                                       Filter: (club_count >= 5)
                                       Rows Removed by Filter: 151
                                 ->  Index Scan using "_EN_callsign" on "_EN"  (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 rows=1 loops=44)
                                       Index Cond: (callsign = "_Club".trustee_callsign)
                           ->  Hash  (cost=1.93..1.93 rows=93 width=7) (actual time=0.071..0.071 rows=88 loops=1)
                                 Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                 ->  Seq Scan on "_GovtRegion"  (cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 loops=1)
                     ->  Nested Loop  (cost=0.29..1.93 rows=1 width=7) (actual time=0.012..0.014 rows=1 loops=55)
                           Join Filter: ("_IsoCountry".iso_alpha2 = "_Territory".country_id)
                           Rows Removed by Join Filter: 0
                           ->  Index Only Scan using "_IsoCountry_iso_alpha2_key" on "_IsoCountry"  (cost=0.14..1.62 rows=1 width=3) (actual time=0.006..0.006 rows=1 loops=55)
                                 Index Cond: (iso_alpha2 = "_GovtRegion".country_id)
                                 Heap Fetches: 55
                           ->  Index Only Scan using "_Territory_pkey" on "_Territory"  (cost=0.14..0.29 rows=1 width=7) (actual time=0.004..0.005 rows=1 loops=55)
                                 Index Cond: (territory_id = "_GovtRegion".territory_id)
                                 Heap Fetches: 59
               ->  Index Scan using "_HD_pkey" on "_HD"  (cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 loops=55)
                     Index Cond: (unique_system_identifier = "_EN".unique_system_identifier)
                     Filter: (("_EN".callsign = callsign) AND (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), '???'::character varying))::text))::character(1) = 'A'::bpchar))
                     Rows Removed by Filter: 0
                     SubPlan 2
                       ->  Limit  (cost=0.15..8.17 rows=1 width=32) (actual time=0.006..0.007 rows=1 loops=55)
                             ->  Index Scan using "_LicStatus_pkey" on "_LicStatus"  (cost=0.15..8.17 rows=1 width=32) (actual time=0.005..0.005 rows=1 loops=55)
                                   Index Cond: ("_HD".license_status = status_id)
         ->  Index Scan using "_AM_pkey" on "_AM"  (cost=0.43..4.27 rows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)
               Index Cond: (unique_system_identifier = "_EN".unique_system_identifier)
               Filter: ("_EN".callsign = callsign)
         SubPlan 1
           ->  Limit  (cost=0.15..8.17 rows=1 width=32) (actual time=0.007..0.007 rows=1 loops=43)
                 ->  Index Scan using "_ApplicantType_pkey" on "_ApplicantType"  (cost=0.15..8.17 rows=1 width=32) (actual time=0.005..0.005 rows=1 loops=43)
                       Index Cond: ("_EN".applicant_type_code = app_type_id)
 Planning time: 13.490 ms
 Execution time: 349.182 ms
(43 rows)


Here's from v13.2:

=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, callsign AS trustee_callsign, applicant_type, entity_name, licensee_id AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count DESC, club_count DESC, entity_name;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=144365.60..144365.60 rows=1 width=94) (actual time=31898.860..31901.922 rows=43 loops=1)
   Sort Key: "_Club".extra_count DESC, "_Club".club_count DESC, "_EN".entity_name
   Sort Method: quicksort  Memory: 31kB
   ->  Nested Loop  (cost=58055.66..144365.59 rows=1 width=94) (actual time=6132.403..31894.233 rows=43 loops=1)
         ->  Nested Loop  (cost=58055.51..144364.21 rows=1 width=62) (actual time=1226.085..30337.921 rows=837792 loops=1)
               ->  Nested Loop Left Join  (cost=58055.09..144360.38 rows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)
                     ->  Hash Join  (cost=58054.80..144359.69 rows=1 width=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)
                           Hash Cond: (("_EN".unique_system_identifier = "_AM".unique_system_identifier) AND ("_EN".callsign = "_AM".callsign))
                           ->  Hash Join  (cost=3.33..53349.72 rows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)
                                 Hash Cond: (("_EN".country_id = "_GovtRegion".country_id) AND ("_EN".state = "_GovtRegion".territory_id))
                                 ->  Seq Scan on "_EN"  (cost=0.00..45288.05 rows=1509005 width=60) (actual time=0.037..2737.054 rows=1508736 loops=1)
                                 ->  Hash  (cost=1.93..1.93 rows=93 width=7) (actual time=0.706..1.264 rows=88 loops=1)
                                       Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                       ->  Seq Scan on "_GovtRegion"  (cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 loops=1)
                           ->  Hash  (cost=28093.99..28093.99 rows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 loops=1)
                                 Buckets: 131072  Batches: 32  Memory Usage: 3175kB
                                 ->  Seq Scan on "_AM"  (cost=0.00..28093.99 rows=1506699 width=15) (actual time=0.009..742.774 rows=1506474 loops=1)
                     ->  Nested Loop  (cost=0.29..0.68 rows=1 width=7) (actual time=0.003..0.004 rows=1 loops=1487153)
                           Join Filter: ("_IsoCountry".iso_alpha2 = "_Territory".country_id)
                           Rows Removed by Join Filter: 0
                           ->  Index Only Scan using "_IsoCountry_iso_alpha2_key" on "_IsoCountry"  (cost=0.14..0.38 rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)
                                 Index Cond: (iso_alpha2 = "_GovtRegion".country_id)
                                 Heap Fetches: 1487153
                           ->  Index Only Scan using "_Territory_pkey" on "_Territory"  (cost=0.14..0.29 rows=1 width=7) (actual time=0.001..0.001 rows=1 loops=1487153)
                                 Index Cond: (territory_id = "_GovtRegion".territory_id)
                                 Heap Fetches: 1550706
               ->  Index Scan using "_HD_pkey" on "_HD"  (cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 loops=1487153)
                     Index Cond: (unique_system_identifier = "_EN".unique_system_identifier)
                     Filter: (("_EN".callsign = callsign) AND (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), '???'::character varying))::text))::character(1) = 'A'::bpchar))
                     Rows Removed by Filter: 0
                     SubPlan 2
                       ->  Limit  (cost=0.00..1.07 rows=1 width=13) (actual time=0.001..0.001 rows=1 loops=1487153)
                             ->  Seq Scan on "_LicStatus"  (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 loops=1487153)
                                   Filter: ("_HD".license_status = status_id)
                                   Rows Removed by Filter: 1
         ->  Index Scan using "_Club_pkey" on "_Club"  (cost=0.14..0.17 rows=1 width=35) (actual time=0.002..0.002 rows=0 loops=837792)
               Index Cond: (trustee_callsign = "_EN".callsign)
               Filter: (club_count >= 5)
               Rows Removed by Filter: 0
         SubPlan 1
           ->  Limit  (cost=0.00..1.20 rows=1 width=15) (actual time=0.060..0.060 rows=1 loops=43)
                 ->  Seq Scan on "_ApplicantType"  (cost=0.00..1.20 rows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)
                       Filter: ("_EN".applicant_type_code = app_type_id)
                       Rows Removed by Filter: 7
 Planning Time: 173.753 ms
 Execution Time: 31919.601 ms
(46 rows)


VIEW genclub_multi_:

=> \d+ genclub_multi_
                    View "Callsign.genclub_multi_"
 Column | Type | Collation | Nullable | Default | Storage | Description
--------+------+-----------+----------+---------+---------+-------------
 sys_id | integer | | | | plain |
 callsign | character(10) | | | | extended |
 fcc_reg_num | character(10) | | | | extended |
 licensee_id | character(9) | | | | extended |
 subgroup_id_num | character(3) | | | | extended |
 applicant_type | text | | | | extended |
 entity_type | text | | | | extended |
 entity_name | character varying(200) | | | | extended |
 attention | character varying(35) | | | | extended |
 first_name | character varying(20) | | | | extended |
 middle_init | character(1) | | | | extended |
 last_name | character varying(20) | | | | extended |
 name_suffix | character(3) | | | | extended |
 street_address | character varying(60) | | | | extended |
 po_box | text | | | | extended |
 locality | character varying | | | | extended |
 locality_ | character varying | | | | extended |
 county | character varying | | | | extended |
 state | text | | | | extended |
 postal_code | text | | | | extended |
 full_name | text | | | | extended |
 _entity_name | text | | | | extended |
 _first_name | text | | | | extended |
 _last_name | text | | | | extended |
 zip5 | character(5) | | | | extended |
 zip_location | "GeoPosition" | | | | extended |
 maidenhead | bpchar | | | | extended |
 geo_region | smallint | | | | plain
|\n uls_file_num | character(14) | | \n | | extended |\n radio_service | text | | \n | | extended |\n license_status | text | | \n | | extended |\n grant_date | date | | \n | | plain |\n effective_date | date | | \n | | plain |\n cancel_date | date | | \n | | plain |\n expire_date | date | | \n | | plain |\n end_date | date | | \n | | plain |\n available_date | date | | \n | | plain |\n last_action_date | date | | \n | | plain |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n callsign_group | text | | \n | | extended |\n operator_group | text | | \n | | extended |\n operator_class | text | | \n | | extended |\n prev_class | text | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n vanity_type | text | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n validity | integer | | \n | | plain |\n club_count | bigint | | \n | | plain |\n extra_count | bigint | | \n | | plain |\n region_count | bigint | | \n | | plain |\n View definition:\n SELECT licjb_.sys_id,\n licjb_.callsign,\n licjb_.fcc_reg_num,\n licjb_.licensee_id,\n licjb_.subgroup_id_num,\n licjb_.applicant_type,\n licjb_.entity_type,\n licjb_.entity_name,\n licjb_.attention,\n licjb_.first_name,\n licjb_.middle_init,\n licjb_.last_name,\n licjb_.name_suffix,\n licjb_.street_address,\n licjb_.po_box,\n licjb_.locality,\n licjb_.locality_,\n licjb_.county,\n licjb_.state,\n licjb_.postal_code,\n licjb_.full_name,\n licjb_._entity_name,\n licjb_._first_name,\n licjb_._last_name,\n licjb_.zip5,\n licjb_.zip_location,\n licjb_.maidenhead,\n licjb_.geo_region,\n licjb_.uls_file_num,\n licjb_.radio_service,\n licjb_.license_status,\n licjb_.grant_date,\n licjb_.effective_date,\n licjb_.cancel_date,\n licjb_.expire_date,\n licjb_.end_date,\n licjb_.available_date,\n licjb_.last_action_date,\n licjb_.uls_region,\n licjb_.callsign_group,\n 
licjb_.operator_group,\n licjb_.operator_class,\n licjb_.prev_class,\n licjb_.prev_callsign,\n licjb_.vanity_type,\n licjb_.is_trustee,\n licjb_.trustee_callsign,\n licjb_.trustee_name,\n licjb_.validity,\n gen.club_count,\n gen.extra_count,\n gen.region_count\n FROM licjb_,\n \"GenLicClub\" gen\n WHERE licjb_.callsign = gen.trustee_callsign AND\n licjb_.license_status::character(1) = 'A'::bpchar;\n\nVIEW GenLicClub:\n\n=> \\d+ \"GenLicClub\"\n View \"Callsign.GenLicClub\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n trustee_callsign | character(10) | | | \n | extended |\n club_count | bigint | | | \n | plain |\n extra_count | bigint | | | \n | plain |\n region_count | bigint | | | \n | plain |\n View definition:\n SELECT \"_Club\".trustee_callsign,\n \"_Club\".club_count,\n \"_Club\".extra_count,\n \"_Club\".region_count\n FROM \"GenLic\".\"_Club\";\n\nTABLE \"GenLic\".\"_Club\":\n\n=> \\d+ \"GenLic\".\"_Club\"\n Table \"GenLic._Club\"\n Column | Type | Collation | Nullable | Default\n | Storage | Stats target | Description\n------------------+---------------+-----------+----------+---------+----------+--------------+-------------\n trustee_callsign | character(10) | | not null | \n | extended | |\n club_count | bigint | | | \n | plain | |\n extra_count | bigint | | | \n | plain | |\n region_count | bigint | | | \n | plain | |\n Indexes:\n \"_Club_pkey\" PRIMARY KEY, btree (trustee_callsign)\n\nVIEW licjb_:\n\n=> \\d+ licjb_\n View \"Callsign.licjb_\"\n Column | Type | Collation | Nullable\n | Default | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n fcc_reg_num | character(10) | | \n | | extended |\n licensee_id | character(9) | | \n | | extended |\n subgroup_id_num | 
character(3) | | \n | | extended |\n applicant_type | text | | \n | | extended |\n entity_type | text | | \n | | extended |\n entity_name | character varying(200) | | \n | | extended |\n attention | character varying(35) | | \n | | extended |\n first_name | character varying(20) | | \n | | extended |\n middle_init | character(1) | | \n | | extended |\n last_name | character varying(20) | | \n | | extended |\n name_suffix | character(3) | | \n | | extended |\n street_address | character varying(60) | | \n | | extended |\n po_box | text | | \n | | extended |\n locality | character varying | | \n | | extended |\n locality_ | character varying | | \n | | extended |\n county | character varying | | \n | | extended |\n state | text | | \n | | extended |\n postal_code | text | | \n | | extended |\n full_name | text | | \n | | extended |\n _entity_name | text | | \n | | extended |\n _first_name | text | | \n | | extended |\n _last_name | text | | \n | | extended |\n zip5 | character(5) | | \n | | extended |\n zip_location | \"GeoPosition\" | | \n | | extended |\n maidenhead | bpchar | | \n | | extended |\n geo_region | smallint | | \n | | plain |\n uls_file_num | character(14) | | \n | | extended |\n radio_service | text | | \n | | extended |\n license_status | text | | \n | | extended |\n grant_date | date | | \n | | plain |\n effective_date | date | | \n | | plain |\n cancel_date | date | | \n | | plain |\n expire_date | date | | \n | | plain |\n end_date | date | | \n | | plain |\n available_date | date | | \n | | plain |\n last_action_date | date | | \n | | plain |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n callsign_group | text | | \n | | extended |\n operator_group | text | | \n | | extended |\n operator_class | text | | \n | | extended |\n prev_class | text | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n vanity_type | text | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | 
character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n validity | integer | | \n | | plain |\n View definition:\n SELECT lic_en_.sys_id,\n lic_en_.callsign,\n lic_en_.fcc_reg_num,\n lic_en_.licensee_id,\n lic_en_.subgroup_id_num,\n lic_en_.applicant_type,\n lic_en_.entity_type,\n lic_en_.entity_name,\n lic_en_.attention,\n lic_en_.first_name,\n lic_en_.middle_init,\n lic_en_.last_name,\n lic_en_.name_suffix,\n lic_en_.street_address,\n lic_en_.po_box,\n lic_en_.locality,\n lic_en_.locality_,\n lic_en_.county,\n lic_en_.state,\n lic_en_.postal_code,\n lic_en_.full_name,\n lic_en_._entity_name,\n lic_en_._first_name,\n lic_en_._last_name,\n lic_en_.zip5,\n lic_en_.zip_location,\n lic_en_.maidenhead,\n lic_en_.geo_region,\n lic_hd_.uls_file_num,\n lic_hd_.radio_service,\n lic_hd_.license_status,\n lic_hd_.grant_date,\n lic_hd_.effective_date,\n lic_hd_.cancel_date,\n lic_hd_.expire_date,\n lic_hd_.end_date,\n lic_hd_.available_date,\n lic_hd_.last_action_date,\n lic_am_.uls_region,\n lic_am_.callsign_group,\n lic_am_.operator_group,\n lic_am_.operator_class,\n lic_am_.prev_class,\n lic_am_.prev_callsign,\n lic_am_.vanity_type,\n lic_am_.is_trustee,\n lic_am_.trustee_callsign,\n lic_am_.trustee_name,\n CASE\n WHEN lic_am_.vanity_type::character(1) = ANY\n (ARRAY['A'::bpchar, 'C'::bpchar]) THEN\n verify_callsign(lic_en_.callsign, lic_en_.licensee_id,\n lic_hd_.grant_date, lic_en_.state::bpchar,\n lic_am_.operator_class::bpchar, lic_en_.applicant_type::bpchar,\n lic_am_.trustee_callsign)\n ELSE NULL::integer\n END AS validity\n FROM lic_en_\n JOIN lic_hd_ USING (sys_id, callsign)\n JOIN lic_am_ USING (sys_id, callsign);\n\nVIEW lic_en_:\n\n=> \\d+ lic_en_\n View \"Callsign.lic_en_\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n 
| | extended |\n fcc_reg_num | character(10) | | \n | | extended |\n licensee_id | character(9) | | \n | | extended |\n subgroup_id_num | character(3) | | \n | | extended |\n applicant_type | text | | \n | | extended |\n entity_type | text | | \n | | extended |\n entity_name | character varying(200) | | \n | | extended |\n attention | character varying(35) | | \n | | extended |\n first_name | character varying(20) | | \n | | extended |\n middle_init | character(1) | | \n | | extended |\n last_name | character varying(20) | | \n | | extended |\n name_suffix | character(3) | | \n | | extended |\n street_address | character varying(60) | | \n | | extended |\n po_box | text | | \n | | extended |\n locality | character varying | | \n | | extended |\n locality_ | character varying | | \n | | extended |\n county | character varying | | \n | | extended |\n state | text | | \n | | extended |\n postal_code | text | | \n | | extended |\n full_name | text | | \n | | extended |\n _entity_name | text | | \n | | extended |\n _first_name | text | | \n | | extended |\n _last_name | text | | \n | | extended |\n zip5 | character(5) | | \n | | extended |\n zip_location | \"GeoPosition\" | | \n | | extended |\n maidenhead | bpchar | | \n | | extended |\n geo_region | smallint | | \n | | plain |\n View definition:\n SELECT lic_en.sys_id,\n lic_en.callsign,\n lic_en.fcc_reg_num,\n lic_en.licensee_id,\n lic_en.subgroup_id_num,\n (lic_en.applicant_type::text || ' - '::text) || COALESCE((\n SELECT \"ApplicantType\".app_type_text\n FROM \"ApplicantType\"\n WHERE lic_en.applicant_type =\n \"ApplicantType\".app_type_id\n LIMIT 1), '???'::character varying)::text AS\n applicant_type,\n (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT\n \"EntityType\".entity_text\n FROM \"EntityType\"\n WHERE lic_en.entity_type = \"EntityType\".entity_id\n LIMIT 1), '???'::character varying)::text AS entity_type,\n lic_en.entity_name,\n lic_en.attention,\n lic_en.first_name,\n lic_en.middle_init,\n 
lic_en.last_name,\n lic_en.name_suffix,\n lic_en.street_address,\n lic_en.po_box,\n lic_en.locality,\n zip_code.locality_text AS locality_,\n \"County\".county_text AS county,\n (territory_id::text || ' - '::text) ||\n COALESCE(govt_region.territory_text, '???'::character\n varying)::text AS state,\n zip9_format(lic_en.postal_code::text) AS postal_code,\n lic_en.full_name,\n lic_en._entity_name,\n lic_en._first_name,\n lic_en._last_name,\n lic_en.zip5,\n zip_code.zip_location,\n maidenhead(zip_code.zip_location) AS maidenhead,\n govt_region.geo_region\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id,\n fips_county);\n\nVIEW lic_en:\n\n=> \\d+ lic_en\n View \"Callsign.lic_en\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n fcc_reg_num | character(10) | | \n | | extended |\n licensee_id | character(9) | | \n | | extended |\n subgroup_id_num | character(3) | | \n | | extended |\n applicant_type | character(1) | | \n | | extended |\n entity_type | character(2) | | \n | | extended |\n entity_name | character varying(200) | | \n | | extended |\n attention | character varying(35) | | \n | | extended |\n first_name | character varying(20) | | \n | | extended |\n middle_init | character(1) | | \n | | extended |\n last_name | character varying(20) | | \n | | extended |\n name_suffix | character(3) | | \n | | extended |\n street_address | character varying(60) | | \n | | extended |\n po_box | text | | \n | | extended |\n locality | character varying | | \n | | extended |\n territory_id | character(2) | | \n | | extended |\n postal_code | character(9) | | \n | | extended |\n full_name | text | | \n | | extended |\n _entity_name | 
text | | \n | | extended |\n _first_name | text | | \n | | extended |\n _last_name | text | | \n | | extended |\n zip5 | character(5) | | \n | | extended |\n country_id | character(2) | | \n | | extended |\n View definition:\n SELECT _lic_en.sys_id,\n _lic_en.callsign,\n _lic_en.fcc_reg_num,\n _lic_en.licensee_id,\n _lic_en.subgroup_id_num,\n _lic_en.applicant_type,\n _lic_en.entity_type,\n _lic_en.entity_name,\n _lic_en.attention,\n _lic_en.first_name,\n _lic_en.middle_init,\n _lic_en.last_name,\n _lic_en.name_suffix,\n _lic_en.street_address,\n _lic_en.po_box,\n _lic_en.locality,\n _lic_en.territory_id,\n _lic_en.postal_code,\n _lic_en.full_name,\n _lic_en._entity_name,\n _lic_en._first_name,\n _lic_en._last_name,\n _lic_en.zip5,\n _lic_en.country_id\n FROM _lic_en;\n\nVIEW _lic_en:\n\n=> \\d+ _lic_en\n View \"Callsign._lic_en\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n fcc_reg_num | character(10) | | \n | | extended |\n licensee_id | character(9) | | \n | | extended |\n subgroup_id_num | character(3) | | \n | | extended |\n applicant_type | character(1) | | \n | | extended |\n entity_type | character(2) | | \n | | extended |\n entity_name | character varying(200) | | \n | | extended |\n attention | character varying(35) | | \n | | extended |\n first_name | character varying(20) | | \n | | extended |\n middle_init | character(1) | | \n | | extended |\n last_name | character varying(20) | | \n | | extended |\n name_suffix | character(3) | | \n | | extended |\n street_address | character varying(60) | | \n | | extended |\n po_box | text | | \n | | extended |\n locality | character varying | | \n | | extended |\n territory_id | character(2) | | \n | | extended |\n postal_code | character(9) | | \n | | extended |\n full_name | text | | \n | | 
extended |\n _entity_name | text | | \n | | extended |\n _first_name | text | | \n | | extended |\n _last_name | text | | \n | | extended |\n zip5 | character(5) | | \n | | extended |\n country_id | character(2) | | \n | | extended |\n View definition:\n SELECT \"_EN\".unique_system_identifier AS sys_id,\n \"_EN\".callsign,\n \"_EN\".frn AS fcc_reg_num,\n \"_EN\".licensee_id,\n \"_EN\".sgin AS subgroup_id_num,\n \"_EN\".applicant_type_code AS applicant_type,\n \"_EN\".entity_type,\n \"_EN\".entity_name,\n \"_EN\".attention_line AS attention,\n \"_EN\".first_name,\n \"_EN\".mi AS middle_init,\n \"_EN\".last_name,\n \"_EN\".suffix AS name_suffix,\n \"_EN\".street_address,\n po_box_format(\"_EN\".po_box::text) AS po_box,\n \"_EN\".city AS locality,\n \"_EN\".state AS territory_id,\n \"_EN\".zip_code AS postal_code,\n initcap(((COALESCE(\"_EN\".first_name::text || ' '::text,\n ''::text) || COALESCE(\"_EN\".mi::text || ' '::text, ''::text)) ||\n \"_EN\".last_name::text) || COALESCE(' '::text ||\n \"_EN\".suffix::text, ''::text)) AS full_name,\n initcap(\"_EN\".entity_name::text) AS _entity_name,\n initcap(\"_EN\".first_name::text) AS _first_name,\n initcap(\"_EN\".last_name::text) AS _last_name,\n \"_EN\".zip_code::character(5) AS zip5,\n \"_EN\".country_id\n FROM \"UlsLic\".\"_EN\";\n\nTABLE \"UlsLic\".\"_EN\":\n\n=> \\d+ \"UlsLic\".\"_EN\"\n Table\n \"UlsLic._EN\"\n Column | Type | Collation |\n Nullable | Default | Storage | Stats target | Description\n--------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | |\n not null | | extended | |\n unique_system_identifier | integer | |\n not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n entity_type | character(2) | \n | | | extended | |\n licensee_id | character(9) | \n | | | extended 
| |\n entity_name | character varying(200) | \n | | | extended | |\n first_name | character varying(20) | \n | | | extended | |\n mi | character(1) | \n | | | extended | |\n last_name | character varying(20) | \n | | | extended | |\n suffix | character(3) | \n | | | extended | |\n phone | character(10) | \n | | | extended | |\n fax | character(10) | \n | | | extended | |\n email | character varying(50) | \n | | | extended | |\n street_address | character varying(60) | \n | | | extended | |\n city | character varying | \n | | | extended | |\n state | character(2) | \n | | | extended | |\n zip_code | character(9) | \n | | | extended | |\n po_box | character varying(20) | \n | | | extended | |\n attention_line | character varying(35) | \n | | | extended | |\n sgin | character(3) | \n | | | extended | |\n frn | character(10) | \n | | | extended | |\n applicant_type_code | character(1) | \n | | | extended | |\n applicant_type_other | character(40) | \n | | | extended | |\n status_code | character(1) | \n | | | extended | |\n status_date | \"MySql\".datetime | \n | | | plain | |\n lic_category_code | character(1) | \n | | | extended | |\n linked_license_id | numeric(9,0) | \n | | | main | |\n linked_callsign | character(10) | \n | | | extended | |\n country_id | character(2) | \n | | | extended | |\n Indexes:\n \"_EN_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_EN__entity_name\" btree (initcap(entity_name::text))\n \"_EN__first_name\" btree (initcap(first_name::text))\n \"_EN__last_name\" btree (initcap(last_name::text))\n \"_EN__zip5\" btree ((zip_code::character(5)))\n \"_EN_callsign\" btree (callsign)\n \"_EN_fcc_reg_num\" btree (frn)\n \"_EN_licensee_id\" btree (licensee_id)\n Check constraints:\n \"_EN_record_type_check\" CHECK (record_type = 'EN'::bpchar)\n Foreign-key constraints:\n \"_EN_applicant_type_code_fkey\" FOREIGN KEY\n (applicant_type_code) REFERENCES\n \"FccLookup\".\"_ApplicantType\"(app_type_id\n )\n \"_EN_entity_type_fkey\" FOREIGN KEY 
(entity_type) REFERENCES\n \"FccLookup\".\"_EntityType\"(entity_id)\n \"_EN_state_fkey\" FOREIGN KEY (state, country_id) REFERENCES\n \"BaseLookup\".\"_Territory\"(territory_id, country_id)\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFERENCES\n \"UlsLic\".\"_HD\"(unique_system_i\n dentifier) ON UPDATE CASCADE ON DELETE CASCADE\n\n\nVIEW lic_hd_:\n\n=> \\d+ lic_hd_\n View \"Callsign.lic_hd_\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | text | | | \n | extended |\n license_status | text | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n end_date | date | | | \n | plain |\n available_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT lic_hd.sys_id,\n lic_hd.callsign,\n lic_hd.uls_file_num,\n (lic_hd.radio_service::text || ' - '::text) || COALESCE((\n SELECT \"RadioService\".service_text\n FROM \"RadioService\"\n WHERE lic_hd.radio_service = \"RadioService\".service_id\n LIMIT 1), '???'::character varying)::text AS\n radio_service,\n (lic_hd.license_status::text || ' - '::text) || COALESCE((\n SELECT \"LicStatus\".status_text\n FROM \"LicStatus\"\n WHERE lic_hd.license_status = \"LicStatus\".status_id\n LIMIT 1), '???'::character varying)::text AS\n license_status,\n lic_hd.grant_date,\n lic_hd.effective_date,\n lic_hd.cancel_date,\n lic_hd.expire_date,\n LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,\n CASE\n WHEN lic_hd.cancel_date < lic_hd.expire_date THEN\n GREATEST((lic_hd.cancel_date + '2 years'::interval)::date,\n lic_hd.last_action_date + 30)\n WHEN 
lic_hd.license_status = 'A'::bpchar AND\n uls_date() > (lic_hd.expire_date + '2 years'::interval)::date\n THEN NULL::date\n ELSE (lic_hd.expire_date + '2 years'::interval)::date\n END + 1 AS available_date,\n lic_hd.last_action_date\n FROM lic_hd;\n\nVIEW lic_hd:\n\n=> \\d+ lic_hd\n View \"Callsign.lic_hd\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | character(2) | | | \n | extended |\n license_status | character(1) | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT _lic_hd.sys_id,\n _lic_hd.callsign,\n _lic_hd.uls_file_num,\n _lic_hd.radio_service,\n _lic_hd.license_status,\n _lic_hd.grant_date,\n _lic_hd.effective_date,\n _lic_hd.cancel_date,\n _lic_hd.expire_date,\n _lic_hd.last_action_date\n FROM _lic_hd;\n\nVIEW _lic_hd:\n\n=> \\d+ _lic_hd\n View \"Callsign._lic_hd\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | character(2) | | | \n | extended |\n license_status | character(1) | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT \"_HD\".unique_system_identifier AS sys_id,\n \"_HD\".callsign,\n 
\"_HD\".uls_file_number AS uls_file_num,\n \"_HD\".radio_service_code AS radio_service,\n \"_HD\".license_status,\n \"_HD\".grant_date,\n \"_HD\".effective_date,\n \"_HD\".cancellation_date AS cancel_date,\n \"_HD\".expired_date AS expire_date,\n \"_HD\".last_action_date\n FROM \"UlsLic\".\"_HD\";\n\nTABLE \"UlsLic\".\"_HD\":\n\n=> \\d+ \"UlsLic\".\"_HD\"\n Table\n \"UlsLic._HD\"\n Column | Type | Collation\n | Nullable | Default | Storage | Stats target | Descr\n iption\n------------------------------+-----------------------+-----------+----------+---------+----------+--------------+------\n -------\n record_type | character(2) | \n | not null | | extended | |\n unique_system_identifier | integer | \n | not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n license_status | character(1) | \n | | | extended | |\n radio_service_code | character(2) | \n | | | extended | |\n grant_date | date | \n | | | plain | |\n expired_date | date | \n | | | plain | |\n cancellation_date | date | \n | | | plain | |\n eligibility_rule_num | character(10) | \n | | | extended | |\n applicant_type_code_reserved | character(1) | \n | | | extended | |\n alien | character(1) | \n | | | extended | |\n alien_government | character(1) | \n | | | extended | |\n alien_corporation | character(1) | \n | | | extended | |\n alien_officer | character(1) | \n | | | extended | |\n alien_control | character(1) | \n | | | extended | |\n revoked | character(1) | \n | | | extended | |\n convicted | character(1) | \n | | | extended | |\n adjudged | character(1) | \n | | | extended | |\n involved_reserved | character(1) | \n | | | extended | |\n common_carrier | character(1) | \n | | | extended | |\n non_common_carrier | character(1) | \n | | | extended | |\n private_comm | character(1) | \n | | | extended | |\n fixed | character(1) | \n | | | extended | |\n mobile | 
character(1) | \n | | | extended | |\n radiolocation | character(1) | \n | | | extended | |\n satellite | character(1) | \n | | | extended | |\n developmental_or_sta | character(1) | \n | | | extended | |\n interconnected_service | character(1) | \n | | | extended | |\n certifier_first_name | character varying(20) | \n | | | extended | |\n certifier_mi | character varying | \n | | | extended | |\n certifier_last_name | character varying | \n | | | extended | |\n certifier_suffix | character(3) | \n | | | extended | |\n certifier_title | character(40) | \n | | | extended | |\n gender | character(1) | \n | | | extended | |\n african_american | character(1) | \n | | | extended | |\n native_american | character(1) | \n | | | extended | |\n hawaiian | character(1) | \n | | | extended | |\n asian | character(1) | \n | | | extended | |\n white | character(1) | \n | | | extended | |\n ethnicity | character(1) | \n | | | extended | |\n effective_date | date | \n | | | plain | |\n last_action_date | date | \n | | | plain | |\n auction_id | integer | \n | | | plain | |\n reg_stat_broad_serv | character(1) | \n | | | extended | |\n band_manager | character(1) | \n | | | extended | |\n type_serv_broad_serv | character(1) | \n | | | extended | |\n alien_ruling | character(1) | \n | | | extended | |\n licensee_name_change | character(1) | \n | | | extended | |\n whitespace_ind | character(1) | \n | | | extended | |\n additional_cert_choice | character(1) | \n | | | extended | |\n additional_cert_answer | character(1) | \n | | | extended | |\n discontinuation_ind | character(1) | \n | | | extended | |\n regulatory_compliance_ind | character(1) | \n | | | extended | |\n dummy1 | character varying | \n | | | extended | |\n dummy2 | character varying | \n | | | extended | |\n dummy3 | character varying | \n | | | extended | |\n dummy4 | character varying | \n | | | extended | |\n Indexes:\n \"_HD_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_HD_callsign\" btree 
(callsign)\n \"_HD_grant_date\" btree (grant_date)\n \"_HD_last_action_date\" btree (last_action_date)\n \"_HD_uls_file_num\" btree (uls_file_number)\n Check constraints:\n \"_HD_record_type_check\" CHECK (record_type = 'HD'::bpchar)\n Foreign-key constraints:\n \"_HD_license_status_fkey\" FOREIGN KEY (license_status)\n REFERENCES \"FccLookup\".\"_LicStatus\"(status_id)\n \"_HD_radio_service_code_fkey\" FOREIGN KEY (radio_service_code)\n REFERENCES \"FccLookup\".\"_RadioService\"(service_id)\n Referenced by:\n TABLE \"\"UlsLic\".\"_AM\"\" CONSTRAINT\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_CO\"\" CONSTRAINT\n \"_CO_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_EN\"\" CONSTRAINT\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_HS\"\" CONSTRAINT\n \"_HS_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_LA\"\" CONSTRAINT\n \"_LA_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SC\"\" CONSTRAINT\n \"_SC_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SF\"\" CONSTRAINT\n \"_SF_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES 
\"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n\nVIEW lic_am_:\n\n => \\d+ lic_am_\n View \"Callsign.lic_am_\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n callsign_group | text | | \n | | extended |\n operator_group | text | | \n | | extended |\n operator_class | text | | \n | | extended |\n prev_class | text | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n vanity_type | text | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT lic_am.sys_id,\n lic_am.callsign,\n lic_am.uls_region,\n ( SELECT (\"CallsignGroup\".group_id::text || ' - '::text) ||\n \"CallsignGroup\".match_text::text\n FROM \"CallsignGroup\"\n WHERE lic_am.callsign ~ \"CallsignGroup\".pattern::text\n LIMIT 1) AS callsign_group,\n ( SELECT (oper_group.group_id::text || ' - '::text) ||\n oper_group.group_text::text\n FROM oper_group\n WHERE lic_am.operator_class = oper_group.class_id\n LIMIT 1) AS operator_group,\n (lic_am.operator_class::text || ' - '::text) || COALESCE((\n SELECT \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.operator_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS\n operator_class,\n (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT\n \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.prev_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS prev_class,\n lic_am.prev_callsign,\n (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT\n \"VanityType\".vanity_text\n FROM 
\"VanityType\"\n WHERE lic_am.vanity_type = \"VanityType\".vanity_id\n LIMIT 1), '???'::character varying)::text AS vanity_type,\n lic_am.is_trustee,\n lic_am.trustee_callsign,\n lic_am.trustee_name\n FROM lic_am;\n\nVIEW lic_am:\n\n=> \\d+ lic_am\n View \"Callsign.lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT _lic_am.sys_id,\n _lic_am.callsign,\n _lic_am.uls_region,\n _lic_am.uls_group,\n _lic_am.operator_class,\n _lic_am.prev_callsign,\n _lic_am.prev_class,\n _lic_am.vanity_type,\n _lic_am.is_trustee,\n _lic_am.trustee_callsign,\n _lic_am.trustee_name\n FROM _lic_am;\n\nVIEW _lic_am:\n\n=> \\d+ _lic_am\n View \"Callsign._lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n 
trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT \"_AM\".unique_system_identifier AS sys_id,\n \"_AM\".callsign,\n \"_AM\".region_code AS uls_region,\n \"_AM\".group_code AS uls_group,\n \"_AM\".operator_class,\n \"_AM\".previous_callsign AS prev_callsign,\n \"_AM\".previous_operator_class AS prev_class,\n \"_AM\".vanity_callsign_change AS vanity_type,\n \"_AM\".trustee_indicator AS is_trustee,\n \"_AM\".trustee_callsign,\n \"_AM\".trustee_name\n FROM \"UlsLic\".\"_AM\";\n\nTABLE \"UlsLic\".\"_AM\":\n\n=> \\d+ \"UlsLic\".\"_AM\"\n Table\n \"UlsLic._AM\"\n Column | Type | Collation |\n Nullable | Default | Storage | Stats target | Description\n----------------------------+-----------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | |\n not null | | extended | |\n unique_system_identifier | integer | |\n not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n operator_class | character(1) | \n | | | extended | |\n group_code | character(1) | \n | | | extended | |\n region_code | \"MySql\".tinyint | \n | | | plain | |\n trustee_callsign | character(10) | \n | | | extended | |\n trustee_indicator | character(1) | \n | | | extended | |\n physician_certification | character(1) | \n | | | extended | |\n ve_signature | character(1) | \n | | | extended | |\n systematic_callsign_change | character(1) | \n | | | extended | |\n vanity_callsign_change | character(1) | \n | | | extended | |\n vanity_relationship | character(12) | \n | | | extended | |\n previous_callsign | character(10) | \n | | | extended | |\n previous_operator_class | character(1) | \n | | | extended | |\n trustee_name | character varying(50) | \n | | | extended | |\n Indexes:\n \"_AM_pkey\" PRIMARY KEY, btree 
(unique_system_identifier)\n \"_AM_callsign\" btree (callsign)\n \"_AM_prev_callsign\" btree (previous_callsign)\n \"_AM_trustee_callsign\" btree (trustee_callsign)\n Check constraints:\n \"_AM_record_type_check\" CHECK (record_type = 'AM'::bpchar)\n Foreign-key constraints:\n \"_AM_operator_class_fkey\" FOREIGN KEY (operator_class)\n REFERENCES \"FccLookup\".\"_OperatorClass\"(class_id)\n \"_AM_previous_operator_class_fkey\" FOREIGN KEY\n (previous_operator_class) REFERENCES\n \"FccLookup\".\"_OperatorClass\"(cla\n ss_id)\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFERENCES\n \"UlsLic\".\"_HD\"(unique_system_i\n dentifier) ON UPDATE CASCADE ON DELETE CASCADE\n \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY\n (vanity_callsign_change) REFERENCES\n \"FccLookup\".\"_VanityType\"(vanity_i\n d)",
"msg_date": "Thu, 27 May 2021 20:41:14 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 at \n> one point), gradually moving to v9.0 w/ replication in 2010. In 2017 I \n> moved my 20GB database to AWS/RDS, gradually upgrading to v9.6, & was \n> entirely satisfied with the result.\n> \n> In March of this year, AWS announced that v9.6 was nearing end of \n> support, & AWS would forcibly upgrade everyone to v12 on January 22, \n> 2022, if users did not perform the upgrade earlier. My first attempt \n> was successful as far as the upgrade itself, but complex queries that \n> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n\nDid you run a plain \nANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on the \ntables in the new install?\n\n> \n> I didn't have the time in March to diagnose the problem, other than some \n> futile adjustments to server parameters, so I reverted back to a saved \n> copy of my v9.6 data.\n> \n> On Sunday, being retired, I decided to attempt to solve the issue in \n> earnest. I have now spent five days (about 14 hours a day), trying \n> various things. Keeping the v9.6 data online for web users, I've \n> \"forked\" the data into a new copy, & updated it in turn to PostgreSQL \n> v10, v11, v12, & v13. All exhibit the same problem: As you will see \n> below, it appears that versions 10 & above are doing a sequential scan \n> of some of the \"large\" (200K rows) tables. Note that the expected & \n> actual run times for v9.6 & v13.2 both differ by more than *two orders \n> of magnitude*. Rather than post a huge eMail (ha ha), I'll start with \n> this one, that shows an \"EXPLAIN ANALYZE\" from both v9.6 & v13.2, \n> followed by the related table & view definitions. With one exception, \n> table definitions are from the FCC (Federal Communications Commission); \n> the view definitions are my own.\n> \n\n\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Fri, 28 May 2021 08:12:52 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 08:12, Adrian Klaver wrote:\n> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 \n>> at one point), gradually moving to v9.0 w/ replication in 2010. In \n>> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to \n>> v9.6, & was entirely satisfied with the result.\n>>\n>> In March of this year, AWS announced that v9.6 was nearing end of \n>> support, & AWS would forcibly upgrade everyone to v12 on January 22, \n>> 2022, if users did not perform the upgrade earlier. My first attempt \n>> was successful as far as the upgrade itself, but complex queries that \n>> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>\n> Did you run a plain \n> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on the \n> tables in the new install?\n\nAfter each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL \nANALYZE\". On 10 through 12, it took about 45 minutes & significant CPU \nactivity, & temporarily doubled the size of the disk space required. As \nyou know, that disk space is not shrinkable under AWS's RDS. On v13, it \ntook 10 hours with limited CPU activity, & actually slightly less disk \nspace required.",
"msg_date": "Fri, 28 May 2021 11:40:29 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/28/21 1:40 PM, Dean Gibson (DB Administrator) wrote:\n> On 2021-05-28 08:12, Adrian Klaver wrote:\n>> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>>> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 at \n>>> one point), gradually moving to v9.0 w/ replication in 2010. In 2017 I \n>>> moved my 20GB database to AWS/RDS, gradually upgrading to v9.6, & was \n>>> entirely satisfied with the result.\n>>>\n>>> In March of this year, AWS announced that v9.6 was nearing end of \n>>> support, & AWS would forcibly upgrade everyone to v12 on January 22, \n>>> 2022, if users did not perform the upgrade earlier. My first attempt \n>>> was successful as far as the upgrade itself, but complex queries that \n>>> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>>\n>> Did you run a plain \n>> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on the \n>> tables in the new install?\n>\n> After each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL ANALYZE\". \n> On 10 through 12, it took about 45 minutes & significant CPU activity, & \n> temporarily doubled the size of the disk space required. As you know, \n> that disk space is not shrinkable under AWS's RDS. On v13, it took 10 \n> hours with limited CPU activity, & actually slightly less disk space \n> required.\n\nUnder normal conditions, VACUUM FULL is pointless on a freshly-loaded \ndatabase; in RDS, it's *anti-useful*.\n\nThat's why Adrian asked if you did a plain ANALYZE.\n\n-- \nAngular momentum makes the world go 'round.",
"msg_date": "Fri, 28 May 2021 14:38:02 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 12:38, Ron wrote:\n> On 5/28/21 1:40 PM, Dean Gibson (DB Administrator) wrote:\n>> On 2021-05-28 08:12, Adrian Klaver wrote:\n>>> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>>>> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems \n>>>> (4 at one point), gradually moving to v9.0 w/ replication in 2010. \n>>>> In 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to \n>>>> v9.6, & was entirely satisfied with the result.\n>>>>\n>>>> In March of this year, AWS announced that v9.6 was nearing end of \n>>>> support, & AWS would forcibly upgrade everyone to v12 on January \n>>>> 22, 2022, if users did not perform the upgrade earlier. My first \n>>>> attempt was successful as far as the upgrade itself, but complex \n>>>> queries that normally ran in a couple of seconds on v9.x, were \n>>>> taking minutes in v12.\n>>>\n>>> Did you run a plain \n>>> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on the \n>>> tables in the new install?\n>>\n>> After each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL \n>> ANALYZE\". On 10 through 12, it took about 45 minutes & significant \n>> CPU activity, & temporarily doubled the size of the disk space \n>> required. As you know, that disk space is not shrinkable under AWS's \n>> RDS. On v13, it took 10 hours with limited CPU activity, & actually \n>> slightly less disk space required.\n>\n> Under normal conditions, VACUUM FULL is pointless on a freshly-loaded \n> database; in RDS, it's *anti-useful*.\n>\n> That's why Adrian asked if you did a plain ANALYZE.\n\nJust now did. 
No change in EXPLAIN ANALYZE output.",
"msg_date": "Fri, 28 May 2021 15:06:10 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/28/21 5:06 PM, Dean Gibson (DB Administrator) wrote:\n> On 2021-05-28 12:38, Ron wrote:\n>> On 5/28/21 1:40 PM, Dean Gibson (DB Administrator) wrote:\n>>> On 2021-05-28 08:12, Adrian Klaver wrote:\n>>>> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>>>>> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 \n>>>>> at one point), gradually moving to v9.0 w/ replication in 2010. In \n>>>>> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6, \n>>>>> & was entirely satisfied with the result.\n>>>>>\n>>>>> In March of this year, AWS announced that v9.6 was nearing end of \n>>>>> support, & AWS would forcibly upgrade everyone to v12 on January 22, \n>>>>> 2022, if users did not perform the upgrade earlier. My first attempt \n>>>>> was successful as far as the upgrade itself, but complex queries that \n>>>>> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>>>>\n>>>> Did you run a plain \n>>>> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on the \n>>>> tables in the new install?\n>>>\n>>> After each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL \n>>> ANALYZE\". On 10 through 12, it took about 45 minutes & significant CPU \n>>> activity, & temporarily doubled the size of the disk space required. As \n>>> you know, that disk space is not shrinkable under AWS's RDS. On v13, it \n>>> took 10 hours with limited CPU activity, & actually slightly less disk \n>>> space required.\n>>\n>> Under normal conditions, VACUUM FULL is pointless on a freshly-loaded \n>> database; in RDS, it's *anti-useful*.\n>>\n>> That's why Adrian asked if you did a plain ANALYZE.\n>\n> Just now did. 
No change in EXPLAIN ANALYZE output.\n\nDid it run in less than 10 hours?\n\n-- \nAngular momentum makes the world go 'round.",
"msg_date": "Fri, 28 May 2021 18:51:18 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 16:51, Ron wrote:\n> On 5/28/21 5:06 PM, Dean Gibson (DB Administrator) wrote:\n>> On 2021-05-28 12:38, Ron wrote:\n>>> On 5/28/21 1:40 PM, Dean Gibson (DB Administrator) wrote:\n>>>> On 2021-05-28 08:12, Adrian Klaver wrote:\n>>>>> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>>>>>> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems \n>>>>>> (4 at one point), gradually moving to v9.0 w/ replication in \n>>>>>> 2010. In 2017 I moved my 20GB database to AWS/RDS, gradually \n>>>>>> upgrading to v9.6, & was entirely satisfied with the result.\n>>>>>>\n>>>>>> In March of this year, AWS announced that v9.6 was nearing end of \n>>>>>> support, & AWS would forcibly upgrade everyone to v12 on January \n>>>>>> 22, 2022, if users did not perform the upgrade earlier. My first \n>>>>>> attempt was successful as far as the upgrade itself, but complex \n>>>>>> queries that normally ran in a couple of seconds on v9.x, were \n>>>>>> taking minutes in v12.\n>>>>>\n>>>>> Did you run a plain \n>>>>> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on \n>>>>> the tables in the new install?\n>>>>\n>>>> After each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL \n>>>> ANALYZE\". On 10 through 12, it took about 45 minutes & significant \n>>>> CPU activity, & temporarily doubled the size of the disk space \n>>>> required. As you know, that disk space is not shrinkable under \n>>>> AWS's RDS. On v13, it took 10 hours with limited CPU activity, & \n>>>> actually slightly less disk space required.\n>>>\n>>> Under normal conditions, VACUUM FULL is pointless on a \n>>> freshly-loaded database; in RDS, it's *anti-useful*.\n>>>\n>>> That's why Adrian asked if you did a plain ANALYZE.\n>>\n>> Just now did. No change in EXPLAIN ANALYZE output.\n>\n> Did it run in less than 10 hours?\n>\n\nThe original VACUUM FULL ANALYZE ran in 10 hours. 
The plain ANALYZE ran \nin 88 seconds.",
"msg_date": "Fri, 28 May 2021 17:38:43 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Le 29/05/2021 à 02:38, Dean Gibson (DB Administrator) a écrit :\n> On 2021-05-28 16:51, Ron wrote:\n>> On 5/28/21 5:06 PM, Dean Gibson (DB Administrator) wrote:\n>>> On 2021-05-28 12:38, Ron wrote:\n>>>> On 5/28/21 1:40 PM, Dean Gibson (DB Administrator) wrote:\n>>>>> On 2021-05-28 08:12, Adrian Klaver wrote:\n>>>>>> On 5/27/21 8:41 PM, Dean Gibson (DB Administrator) wrote:\n>>>>>>> I started to use PostgreSQL v7.3 in 2003 on my home Linux\n>>>>>>> systems (4 at one point), gradually moving to v9.0 w/\n>>>>>>> replication in 2010. In 2017 I moved my 20GB database to\n>>>>>>> AWS/RDS, gradually upgrading to v9.6, & was entirely satisfied\n>>>>>>> with the result.\n>>>>>>>\n>>>>>>> In March of this year, AWS announced that v9.6 was nearing end\n>>>>>>> of support, & AWS would forcibly upgrade everyone to v12 on\n>>>>>>> January 22, 2022, if users did not perform the upgrade earlier. \n>>>>>>> My first attempt was successful as far as the upgrade itself,\n>>>>>>> but complex queries that normally ran in a couple of seconds on\n>>>>>>> v9.x, were taking minutes in v12.\n>>>>>>\n>>>>>> Did you run a plain\n>>>>>> ANALYZE(https://www.postgresql.org/docs/12/sql-analyze.html) on\n>>>>>> the tables in the new install?\n>>>>>\n>>>>> After each upgrade (to 10, 11, 12, & 13), I did a \"VACUUM FULL\n>>>>> ANALYZE\". On 10 through 12, it took about 45 minutes &\n>>>>> significant CPU activity, & temporarily doubled the size of the\n>>>>> disk space required. As you know, that disk space is not\n>>>>> shrinkable under AWS's RDS. On v13, it took 10 hours with limited\n>>>>> CPU activity, & actually slightly less disk space required.\n>>>>\n>>>> Under normal conditions, VACUUM FULL is pointless on a\n>>>> freshly-loaded database; in RDS, it's *anti-useful*.\n>>>>\n>>>> That's why Adrian asked if you did a plain ANALYZE.\n>>>\n>>> Just now did. 
No change in EXPLAIN ANALYZE output.\n>>\n>> Did it run in less than 10 hours?\n>>\n>\n> The original VACUUM FULL ANALYZE ran in 10 hours. The plain ANALYZE\n> ran in 88 seconds.\n\nOne possibility is that your data has a distribution that defeats the\nANALYZE sampling strategy.\n\nIf that is the case you can force ANALYZE to do a better job by\nincreasing the default_statistics_target value (100 by default) and\nreload the configuration. This will sample more data from your table\nwhich should help the planner find out what the value distribution looks\nlike for a column and why using an index for conditions involving it is\na better solution.\nThe last time I had to use this setting to solve this kind of problem I\nended with :\n\ndefault_statistics_target = 500\n\nBut obviously the value suited to your case could be different (I'd\nincrease it until the planner uses the correct index). Note that\nincreasing it increases the costs of maintaining statistics (so you\ndon't want to increase this by several orders of magnitude blindly) but\nthe default value seems fairly conservative to me.\n\nFor reference and more fine-tuned settings using per table statistics\nconfiguration and multi-column statistics for complex situations, see :\n- https://www.postgresql.org/docs/13/runtime-config-query.html\n- https://www.postgresql.org/docs/13/planner-stats.html\n\n-- \nLionel Bouton\ngérant de JTEK SARL\nhttps://www.linkedin.com/in/lionelbouton/",
"msg_date": "Sat, 29 May 2021 12:40:46 +0200",
"msg_from": "Lionel Bouton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/28/21 5:38 PM, Dean Gibson (DB Administrator) wrote:\n\n>>\n>> Did it run in less than 10 hours?\n>>\n> \n> The original VACUUM FULL ANALYZE ran in 10 hours. The plain ANALYZE ran \n> in 88 seconds.\n\n\nCan you repeat your EXPLAIN (ANALYZE, BUFFERS) of the query from your \nfirst post and post them here:\n\nhttps://explain.depesz.com/\n\nOther information:\n1) A diff of your configuration settings between 9.6 and 13.2.\n\n2) Are you running on the same AWS instance type for the two versions of \nPostgres?\n\nIt is not necessary to repeat the table/view definitions as they are \navailable in the first post.\n\n-- \nAdrian Klaver\[email protected]\n\n\n",
"msg_date": "Sat, 29 May 2021 09:25:48 -0700",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-29 09:25, Adrian Klaver wrote:\n> On 5/28/21 5:38 PM, Dean Gibson (DB Administrator) wrote:\n>\n> Can you repeat your EXPLAIN (ANALYZE, BUFFERS) of the query from your \n> first post and post them here:\n>\n> https://explain.depesz.com/\n>\n> Other information:\n> 1) A diff of your configuration settings between 9.6 and 13.2.\n>\n> 2) Are you running on the same AWS instance type for the two versions \n> of Postgres?\n>\n> It is not necessary to repeat the table/view definitions as they are \n> available in the first post.\n\nDone.\n\n1.There's probably about a hundred, but almost all are differences in \nthe default values. The most interesting (from my point of view) is my \nsetting work_mem in 8000 on v9.6, & 16000 (after 8000 didn't help) on \nv13. Doing a compare right now between the DEFAULT parameters for 9.6 & \n13, RDS reports 93 differences in the default parameters between the two.\n\n2. For v13, I moved from db.t2.micro to db.t3.micro, because RDS \nrequired that for v13. However, for the v10, 11, 12 upgrades, I kept \ndb.t2.micro.\n\nMeanwhile, I've been doing some checking. If I remove \"CAST( \nlicense_status AS CHAR ) = 'A'\", the problem disappears. Changing the \nJOIN to a RIGHT JOIN, & replacing WHERE with ON, also \"solves\" the \nproblem, but there is an extra row where license_status is NULL, due to \nthe RIGHT JOIN. Currently trying to figure that out (why did the CAST \n... 
match 'A', if it is null?)...",
"msg_date": "Sat, 29 May 2021 12:59:47 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:\n>\n>\n> Meanwhile, I've been doing some checking. If I remove \"CAST(\n> license_status AS CHAR ) = 'A'\", the problem disappears. Changing the\n> JOIN to a RIGHT JOIN, & replacing WHERE with ON, also \"solves\" the\n> problem, but there is an extra row where license_status is NULL, due\n> to the RIGHT JOIN. Currently trying to figure that out (why did the\n> CAST ... match 'A', if it is null?)...\n\n\nWhy are you using this expression? It's something you almost never want\nto do in my experience. Why not use the substr() function to get the\nfirst character?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 29 May 2021 16:35:27 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Sat, May 29, 2021, 4:40 AM Lionel Bouton <[email protected]> wrote:\n\n> The last time I had to use this setting to solve this kind of problem I\n> ended with :\n>\n> default_statistics_target = 500\n>\n> But obviously the value suited to your case could be different (I'd\n> increase it until the planner uses the correct index). Note that increasing\n> it increases the costs of maintaining statistics (so you don't want to\n> increase this by several orders of magnitude blindly) but the default value\n> seems fairly conservative to me.\n>\n\nIt also increases planning time since those distribution statistics need to\nbe consumed and decisions have to be made.",
"msg_date": "Sat, 29 May 2021 16:34:57 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "I tried 500, to no avail. Since each change involves a delay as RDS \nreadjusts, I'm going down a different path at the moment.\n\nOn 2021-05-29 03:40, Lionel Bouton wrote:\n> Le 29/05/2021 à 02:38, Dean Gibson (DB Administrator) a écrit :\n>> The original VACUUM FULL ANALYZE ran in 10 hours. The plain ANALYZE \n>> ran in 88 seconds.\n>\n> One possibility is that your data has a distribution that defeats the \n> ANALYZE sampling strategy.\n>\n> If that is the case you can force ANALYZE to do a better job by \n> increasing the default_statistics_target value (100 by default) and \n> reload the configuration. This will sample more data from your table \n> which should help the planner find out what the value distribution \n> looks like for a column and why using an index for conditions \n> involving it is a better solution.\n> The last time I had to use this setting to solve this kind of problem \n> I ended with :\n>\n> default_statistics_target = 500\n>\n> But obviously the value suited to your case could be different (I'd \n> increase it until the planner uses the correct index). Note that \n> increasing it increases the costs of maintaining statistics (so you \n> don't want to increase this by several orders of magnitude blindly) \n> but the default value seems fairly conservative to me.\n>\n> For reference and more fine-tuned settings using per table statistics \n> configuration and multi-column statistics for complex situations, see :\n> - https://www.postgresql.org/docs/13/runtime-config-query.html\n> - https://www.postgresql.org/docs/13/planner-stats.html\n>\n> -- \n> Lionel Bouton\n> gérant de JTEK SARL\n> https://www.linkedin.com/in/lionelbouton/",
"msg_date": "Sat, 29 May 2021 15:47:39 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-29 13:35, Andrew Dunstan wrote:\n> On 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:\n>> Meanwhile, I've been doing some checking. If I remove \"CAST(\n>> license_status AS CHAR ) = 'A'\", the problem disappears. Changing the\n>> JOIN to a RIGHT JOIN, & replacing WHERE with ON, also \"solves\" the\n>> problem, but there is an extra row where license_status is NULL, due\n>> to the RIGHT JOIN. Currently trying to figure that out (why did the\n>> CAST ... match 'A', if it is null?)...\n> Why are you using this expression? It's something you almost never want\n> to do in my experience. Why not use the substr() function to get the\n> first character?\n>\n> cheers\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nAlthough it doesn't matter in this case, I do it because in general, it \nchanges the type of the value from CHAR to bptext or whatever it is, & \nthat has causes comparison issues in the past. It's just a matter of \nhabit for me when working with CHAR() types.\n\nBut this case, where it doesn't matter, I'd use LEFT().",
"msg_date": "Sun, 6 Jun 2021 16:49:24 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/6/21 7:49 PM, Dean Gibson (DB Administrator) wrote:\n> On 2021-05-29 13:35, Andrew Dunstan wrote:\n>> On 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:\n>>> Meanwhile, I've been doing some checking. If I remove \"CAST(\n>>> license_status AS CHAR ) = 'A'\", the problem disappears. Changing the\n>>> JOIN to a RIGHT JOIN, & replacing WHERE with ON, also \"solves\" the\n>>> problem, but there is an extra row where license_status is NULL, due\n>>> to the RIGHT JOIN. Currently trying to figure that out (why did the\n>>> CAST ... match 'A', if it is null?)...\n>> Why are you using this expression? It's something you almost never want\n>> to do in my experience. Why not use the substr() function to get the\n>> first character?\n>>\n>\n> Although it doesn't matter in this case, I do it because in general,\n> it changes the type of the value from CHAR to bptext or whatever it\n> is, & that has causes comparison issues in the past. It's just a\n> matter of habit for me when working with CHAR() types.\n>\n> But this case, where it doesn't matter, I'd use LEFT().\n\n\n\nThat raises the issue of why you're using CHAR(n) fields. Just about\nevery consultant I know advises simply avoiding them. :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 7 Jun 2021 07:52:44 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-07 04:52, Andrew Dunstan wrote:\n> On 6/6/21 7:49 PM, Dean Gibson (DB Administrator) wrote:\n>> On 2021-05-29 13:35, Andrew Dunstan wrote:\n>>> On 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:\n>>>> ... If I remove \"CAST( license_status AS CHAR ) = 'A'\", ...\n>>> Why are you using this expression? It's something you almost never want to do in my experience. Why not use the substr() function to get the\n>>> first character?\n>> Although it doesn't matter in this case, I do it because in general, it changes the type of the value from CHAR to bptext or whatever it is, & that has caused comparison issues in the past. It's just a matter of habit for me when working with CHAR() types.\n>>\n>> But this case, where it doesn't matter, I'd use LEFT().\n>>\n>>\n>> That raises the issue of why you're using CHAR(n) fields. Just about every consultant I know advises simply avoiding them. :-)\n>>\n>> cheers, andrew\n>>\n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n\nAs I mentioned earlier, both the data & the table definitions come from \nthe FCC, the latter in the form of text files containing their formal \nSQL definitions. These often change (like two weeks ago). There are 18 \ntables currently of interest to me, with between 30 & 60 fields in each \ntable. Further, the entire data set is replaced every Sunday, with \ndaily updates during the week. About 1/6th of the text fields are \ndefined as VARCHAR; the rest are CHAR. All of the text fields that are \nused as indexes, are CHAR.\n\nBeing mindful of the fact that trailing blanks are significant in CHAR \nfields, I find it easier to keep the original FCC table definitions, & \nremap them to VIEWs containing the fields I am interested in. I've been \ndoing this with the FCC data for over 15 years, starting with PostgreSQL \n7.3.\n\nAs far as needing a consultant in DB design, the FCC is planning a new \nDB architecture \"soon\", & they sorely need one. 
When they export the \ndata to the public (delimited by \"|\"), they don't escape some characters \nlike \"|\", \"\\\", & <cr>. That makes it fun ...\n\n-- Dean\n\n\n\n\n\n\nOn 2021-06-07 04:52, Andrew Dunstan\n wrote:\n\n\n\nOn 6/6/21 7:49 PM, Dean Gibson (DB Administrator) wrote:\n\n\nOn 2021-05-29 13:35, Andrew Dunstan wrote:\n\n\nOn 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:\n\n\n... If I remove \"CAST( license_status AS CHAR ) = 'A'\", ...\n\n\nWhy are you using this expression? It's something you almost never want to do in my experience. Why not use the substr() function to get the\nfirst character?\n\n\n\nAlthough it doesn't matter in this case, I do it because in general, it changes the type of the value from CHAR to bptext or whatever it is, & that has caused comparison issues in the past. It's just a matter of habit for me when working with CHAR() types.\n\nBut this case, where it doesn't matter, I'd use LEFT().\n\n\nThat raises the issue of why you're using CHAR(n) fields. Just about every consultant I know advises simply avoiding them. :-)\n\ncheers, andrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n\n\n As I mentioned earlier, both the data & the table definitions\n come from the FCC, the latter in the form of text files containing\n their formal SQL definitions. These often change (like two weeks\n ago). There are 18 tables currently of interest to me, with between\n 30 & 60 fields in each table. Further, the entire data set is\n replaced every Sunday, with daily updates during the week. About\n 1/6th of the text fields are defined as VARCHAR; the rest are\n CHAR. All of the text fields that are used as indexes, are CHAR.\n\n Being mindful of the fact that trailing blanks are significant in\n CHAR fields, I find it easier to keep the original FCC table\n definitions, & remap them to VIEWs containing the fields I am\n interested in. 
I've been doing this with the FCC data for over 15\n years, starting with PostgreSQL 7.3.\n\n As far as needing a consultant in DB design, the FCC is planning a\n new DB architecture \"soon\", & they sorely need one. When they\n export the data to the public (delimited by \"|\"), they don't escape\n some characters like \"|\", \"\\\", & <cr>. That makes it fun\n ...\n\n -- Dean",
"msg_date": "Mon, 7 Jun 2021 10:27:13 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
}
] |
[
{
"msg_contents": "[Reposted to the proper list]\n\nI started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 at \none point), gradually moving to v9.0 w/ replication in 2010. In 2017 I \nmoved my 20GB database to AWS/RDS, gradually upgrading to v9.6, & was \nentirely satisfied with the result.\n\nIn March of this year, AWS announced that v9.6 was nearing end of \nsupport, & AWS would forcibly upgrade everyone to v12 on January 22, \n2022, if users did not perform the upgrade earlier. My first attempt \nwas successful as far as the upgrade itself, but complex queries that \nnormally ran in a couple of seconds on v9.x, were taking minutes in v12.\n\nI didn't have the time in March to diagnose the problem, other than some \nfutile adjustments to server parameters, so I reverted back to a saved \ncopy of my v9.6 data.\n\nOn Sunday, being retired, I decided to attempt to solve the issue in \nearnest. I have now spent five days (about 14 hours a day), trying \nvarious things, including adding additional indexes. Keeping the v9.6 \ndata online for web users, I've \"forked\" the data into new copies, & \nupdated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit \nthe same problem: As you will see below, it appears that versions 10 & \nabove are doing a sequential scan of some of the \"large\" (200K rows) \ntables. Note that the expected & actual run times both differ for v9.6 \n& v13.2, by more than *two orders of magnitude*. Rather than post a huge \neMail (ha ha), I'll start with this one, that shows an \"EXPLAIN ANALYZE\" \nfrom both v9.6 & v13.2, followed by the related table & view \ndefinitions. 
With one exception, table definitions are from the FCC \n(Federal Communications Commission); the view definitions are my own.\n\n*Here's from v9.6:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407.13..407.13 rows=1 width=94) (actual \ntime=348.850..348.859 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual \ntime=7.587..348.732 rows=43 loops=1)\n -> Nested Loop (cost=4.47..394.66 rows=1 width=94) (actual \ntime=5.740..248.149 rows=43 loops=1)\n -> Nested Loop Left Join (cost=4.04..382.20 rows=1 \nwidth=79) (actual time=2.458..107.908 rows=55 loops=1)\n -> Hash Join (cost=3.75..380.26 rows=1 width=86) \n(actual time=2.398..106.990 rows=55 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Nested Loop (cost=0.43..376.46 rows=47 \nwidth=94) (actual time=2.294..106.736 rows=55 loops=1)\n -> Seq Scan on \"_Club\" \n(cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 \nloops=1)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 151\n -> Index Scan using \"_EN_callsign\" on \n\"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 \nrows=1 loops=44)\n Index Cond: (callsign = \n\"_Club\".trustee_callsign)\n -> Hash (cost=1.93..1.93 rows=93 width=7) \n(actual time=0.071..0.071 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" 
\n(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 loops=1)\n -> Nested Loop (cost=0.29..1.93 rows=1 width=7) \n(actual time=0.012..0.014 rows=1 loops=55)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62 rows=1 \nwidth=3) (actual time=0.006..0.006 rows=1 loops=55)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 55\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n (actual time=0.004..0.005 rows=1 loops=55)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 59\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 \nloops=55)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.15..8.17 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=55)\n -> Index Scan using \"_LicStatus_pkey\" on \n\"_LicStatus\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=55)\n Index Cond: (\"_HD\".license_status = \nstatus_id)\n -> Index Scan using \"_AM_pkey\" on \"_AM\" (cost=0.43..4.27 \nrows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: (\"_EN\".callsign = callsign)\n SubPlan 1\n -> Limit (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.007..0.007 rows=1 loops=43)\n -> Index Scan using \"_ApplicantType_pkey\" on \n\"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=43)\n Index Cond: (\"_EN\".applicant_type_code = 
\napp_type_id)\n Planning time: 13.490 ms\n Execution time: 349.182 ms\n(43 rows)\n\n\n*Here's from v13.2:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=144365.60..144365.60 rows=1 width=94) (actual \ntime=31898.860..31901.922 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94) (actual \ntime=6132.403..31894.233 rows=43 loops=1)\n -> Nested Loop (cost=58055.51..144364.21 rows=1 width=62) \n(actual time=1226.085..30337.921 rows=837792 loops=1)\n -> Nested Loop Left Join (cost=58055.09..144360.38 \nrows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)\n -> Hash Join (cost=58054.80..144359.69 rows=1 \nwidth=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".unique_system_identifier \n= \"_AM\".unique_system_identifier) AND (\"_EN\".callsign = \"_AM\".callsign))\n -> Hash Join (cost=3.33..53349.72 \nrows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Seq Scan on \"_EN\" \n(cost=0.00..45288.05 rows=1509005 width=60) (actual time=0.037..2737.054 \nrows=1508736 loops=1)\n -> Hash (cost=1.93..1.93 rows=93 \nwidth=7) (actual time=0.706..1.264 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" \n(cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 
loops=1)\n -> Hash (cost=28093.99..28093.99 \nrows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 loops=1)\n Buckets: 131072 Batches: 32 Memory \nUsage: 3175kB\n -> Seq Scan on \"_AM\" \n(cost=0.00..28093.99 rows=1506699 width=15) (actual time=0.009..742.774 \nrows=1506474 loops=1)\n -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n(actual time=0.003..0.004 rows=1 loops=1487153)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \nwidth=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 1487153\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \ntime=0.001..0.001 rows=1 loops=1487153)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 1550706\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 \nloops=1487153)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.00..1.07 rows=1 width=13) \n(actual time=0.001..0.001 rows=1 loops=1487153)\n -> Seq Scan on \"_LicStatus\" \n(cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \nloops=1487153)\n Filter: (\"_HD\".license_status = \nstatus_id)\n Rows Removed by Filter: 1\n -> Index Scan using \"_Club_pkey\" on \"_Club\" (cost=0.14..0.17 \nrows=1 width=35) (actual time=0.002..0.002 rows=0 loops=837792)\n Index Cond: (trustee_callsign = \"_EN\".callsign)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 0\n SubPlan 1\n -> Limit (cost=0.00..1.20 rows=1 
width=15) (actual \ntime=0.060..0.060 rows=1 loops=43)\n -> Seq Scan on \"_ApplicantType\" (cost=0.00..1.20 \nrows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)\n Filter: (\"_EN\".applicant_type_code = app_type_id)\n Rows Removed by Filter: 7\n Planning Time: 173.753 ms\n Execution Time: 31919.601 ms\n(46 rows)\n\n\n*VIEW genclub_multi_:*\n\n=> \\d+ genclub_multi_\n View \"Callsign.genclub_multi_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | 
text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\n club_count | bigint | | | | \nplain |\n extra_count | bigint | | | | \nplain |\n region_count | bigint | | | | \nplain |\nView definition:\n SELECT licjb_.sys_id,\n licjb_.callsign,\n licjb_.fcc_reg_num,\n licjb_.licensee_id,\n licjb_.subgroup_id_num,\n licjb_.applicant_type,\n licjb_.entity_type,\n licjb_.entity_name,\n licjb_.attention,\n licjb_.first_name,\n licjb_.middle_init,\n licjb_.last_name,\n licjb_.name_suffix,\n licjb_.street_address,\n licjb_.po_box,\n licjb_.locality,\n licjb_.locality_,\n licjb_.county,\n licjb_.state,\n licjb_.postal_code,\n licjb_.full_name,\n licjb_._entity_name,\n licjb_._first_name,\n licjb_._last_name,\n licjb_.zip5,\n licjb_.zip_location,\n licjb_.maidenhead,\n licjb_.geo_region,\n licjb_.uls_file_num,\n licjb_.radio_service,\n licjb_.license_status,\n licjb_.grant_date,\n licjb_.effective_date,\n licjb_.cancel_date,\n licjb_.expire_date,\n licjb_.end_date,\n licjb_.available_date,\n licjb_.last_action_date,\n licjb_.uls_region,\n licjb_.callsign_group,\n licjb_.operator_group,\n licjb_.operator_class,\n licjb_.prev_class,\n licjb_.prev_callsign,\n 
licjb_.vanity_type,\n licjb_.is_trustee,\n licjb_.trustee_callsign,\n licjb_.trustee_name,\n licjb_.validity,\n gen.club_count,\n gen.extra_count,\n gen.region_count\n FROM licjb_,\n \"GenLicClub\" gen\n WHERE licjb_.callsign = gen.trustee_callsign AND \nlicjb_.license_status::character(1) = 'A'::bpchar;\n*\n**VIEW GenLicClub:*\n\n=> \\d+ \"GenLicClub\"\n View \"Callsign.GenLicClub\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n trustee_callsign | character(10) | | | | extended |\n club_count | bigint | | | | plain |\n extra_count | bigint | | | | plain |\n region_count | bigint | | | | plain |\nView definition:\n SELECT \"_Club\".trustee_callsign,\n \"_Club\".club_count,\n \"_Club\".extra_count,\n \"_Club\".region_count\n FROM \"GenLic\".\"_Club\";\n\n*TABLE \"GenLic\".\"_Club\":*\n\n=> \\d+ \"GenLic\".\"_Club\"\n Table \"GenLic._Club\"\n Column | Type | Collation | Nullable | Default | \nStorage | Stats target | Description\n------------------+---------------+-----------+----------+---------+----------+--------------+-------------\n trustee_callsign | character(10) | | not null | | extended \n| |\n club_count | bigint | | | | plain \n| |\n extra_count | bigint | | | | plain \n| |\n region_count | bigint | | | | plain \n| |\nIndexes:\n \"_Club_pkey\" PRIMARY KEY, btree (trustee_callsign)\n\n*VIEW licjb_:*\n\n=> \\d+ licjb_\n View \"Callsign.licjb_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | 
\nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\nView definition:\n SELECT 
lic_en_.sys_id,\n lic_en_.callsign,\n lic_en_.fcc_reg_num,\n lic_en_.licensee_id,\n lic_en_.subgroup_id_num,\n lic_en_.applicant_type,\n lic_en_.entity_type,\n lic_en_.entity_name,\n lic_en_.attention,\n lic_en_.first_name,\n lic_en_.middle_init,\n lic_en_.last_name,\n lic_en_.name_suffix,\n lic_en_.street_address,\n lic_en_.po_box,\n lic_en_.locality,\n lic_en_.locality_,\n lic_en_.county,\n lic_en_.state,\n lic_en_.postal_code,\n lic_en_.full_name,\n lic_en_._entity_name,\n lic_en_._first_name,\n lic_en_._last_name,\n lic_en_.zip5,\n lic_en_.zip_location,\n lic_en_.maidenhead,\n lic_en_.geo_region,\n lic_hd_.uls_file_num,\n lic_hd_.radio_service,\n lic_hd_.license_status,\n lic_hd_.grant_date,\n lic_hd_.effective_date,\n lic_hd_.cancel_date,\n lic_hd_.expire_date,\n lic_hd_.end_date,\n lic_hd_.available_date,\n lic_hd_.last_action_date,\n lic_am_.uls_region,\n lic_am_.callsign_group,\n lic_am_.operator_group,\n lic_am_.operator_class,\n lic_am_.prev_class,\n lic_am_.prev_callsign,\n lic_am_.vanity_type,\n lic_am_.is_trustee,\n lic_am_.trustee_callsign,\n lic_am_.trustee_name,\n CASE\n WHEN lic_am_.vanity_type::character(1) = ANY \n(ARRAY['A'::bpchar, 'C'::bpchar]) THEN verify_callsign(lic_en_.callsign, \nlic_en_.licensee_id, lic_hd_.grant_date, lic_en_.state::bpchar, \nlic_am_.operator_class::bpchar, lic_en_.applicant_type::bpchar, \nlic_am_.trustee_callsign)\n ELSE NULL::integer\n END AS validity\n FROM lic_en_\n JOIN lic_hd_ USING (sys_id, callsign)\n JOIN lic_am_ USING (sys_id, callsign);\n\n*VIEW lic_en_:*\n\n=> \\d+ lic_en_\n View \"Callsign.lic_en_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended 
|\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\nView definition:\n SELECT lic_en.sys_id,\n lic_en.callsign,\n lic_en.fcc_reg_num,\n lic_en.licensee_id,\n lic_en.subgroup_id_num,\n (lic_en.applicant_type::text || ' - '::text) || COALESCE(( SELECT \n\"ApplicantType\".app_type_text\n FROM \"ApplicantType\"\n WHERE lic_en.applicant_type = \"ApplicantType\".app_type_id\n LIMIT 1), '???'::character varying)::text AS applicant_type,\n (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT \n\"EntityType\".entity_text\n FROM \"EntityType\"\n WHERE lic_en.entity_type = \"EntityType\".entity_id\n LIMIT 1), '???'::character varying)::text AS entity_type,\n lic_en.entity_name,\n lic_en.attention,\n lic_en.first_name,\n lic_en.middle_init,\n lic_en.last_name,\n lic_en.name_suffix,\n lic_en.street_address,\n lic_en.po_box,\n lic_en.locality,\n zip_code.locality_text AS locality_,\n \"County\".county_text AS county,\n (territory_id::text 
|| ' - '::text) || \nCOALESCE(govt_region.territory_text, '???'::character varying)::text AS \nstate,\n zip9_format(lic_en.postal_code::text) AS postal_code,\n lic_en.full_name,\n lic_en._entity_name,\n lic_en._first_name,\n lic_en._last_name,\n lic_en.zip5,\n zip_code.zip_location,\n maidenhead(zip_code.zip_location) AS maidenhead,\n govt_region.geo_region\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\n*VIEW lic_en:*\n\n=> \\d+ lic_en\n View \"Callsign.lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT _lic_en.sys_id,\n _lic_en.callsign,\n _lic_en.fcc_reg_num,\n _lic_en.licensee_id,\n _lic_en.subgroup_id_num,\n _lic_en.applicant_type,\n _lic_en.entity_type,\n _lic_en.entity_name,\n _lic_en.attention,\n _lic_en.first_name,\n _lic_en.middle_init,\n _lic_en.last_name,\n _lic_en.name_suffix,\n _lic_en.street_address,\n _lic_en.po_box,\n _lic_en.locality,\n _lic_en.territory_id,\n _lic_en.postal_code,\n _lic_en.full_name,\n _lic_en._entity_name,\n _lic_en._first_name,\n _lic_en._last_name,\n _lic_en.zip5,\n _lic_en.country_id\n FROM _lic_en;\n\n*VIEW _lic_en:*\n\n=> \\d+ _lic_en\n View \"Callsign._lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT \"_EN\".unique_system_identifier AS sys_id,\n \"_EN\".callsign,\n \"_EN\".frn AS fcc_reg_num,\n \"_EN\".licensee_id,\n \"_EN\".sgin AS subgroup_id_num,\n \"_EN\".applicant_type_code AS applicant_type,\n \"_EN\".entity_type,\n \"_EN\".entity_name,\n \"_EN\".attention_line AS attention,\n \"_EN\".first_name,\n \"_EN\".mi AS middle_init,\n \"_EN\".last_name,\n \"_EN\".suffix AS name_suffix,\n \"_EN\".street_address,\n po_box_format(\"_EN\".po_box::text) AS po_box,\n \"_EN\".city AS locality,\n \"_EN\".state AS territory_id,\n \"_EN\".zip_code AS postal_code,\n initcap(((COALESCE(\"_EN\".first_name::text || ' '::text, ''::text) \n|| COALESCE(\"_EN\".mi::text || ' '::text, ''::text)) || \n\"_EN\".last_name::text) || COALESCE(' '::text || \"_EN\".suffix::text, \n''::text)) AS full_name,\n initcap(\"_EN\".entity_name::text) AS _entity_name,\n initcap(\"_EN\".first_name::text) AS _first_name,\n initcap(\"_EN\".last_name::text) AS _last_name,\n \"_EN\".zip_code::character(5) AS zip5,\n \"_EN\".country_id\n FROM \"UlsLic\".\"_EN\";\n\n*TABLE \"UlsLic\".\"_EN\"**:*\n\n=> \\d+ \"UlsLic\".\"_EN\"\n Table \"UlsLic._EN\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Description\n--------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | | not \nnull | | extended | |\n unique_system_identifier | integer | | not \nnull | | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n entity_type | character(2) | | \n| | extended | |\n licensee_id | character(9) | | \n| | extended | |\n entity_name | character varying(200) | | \n| | extended | |\n first_name | character varying(20) | | \n| | extended | |\n mi | character(1) | | \n| | extended | |\n last_name | character varying(20) | | \n| | extended | |\n suffix | 
character(3) | | \n| | extended | |\n phone | character(10) | | \n| | extended | |\n fax | character(10) | | \n| | extended | |\n email | character varying(50) | | \n| | extended | |\n street_address | character varying(60) | | \n| | extended | |\n city | character varying | | \n| | extended | |\n state | character(2) | | \n| | extended | |\n zip_code | character(9) | | \n| | extended | |\n po_box | character varying(20) | | \n| | extended | |\n attention_line | character varying(35) | | \n| | extended | |\n sgin | character(3) | | \n| | extended | |\n frn | character(10) | | \n| | extended | |\n applicant_type_code | character(1) | | \n| | extended | |\n applicant_type_other | character(40) | | \n| | extended | |\n status_code | character(1) | | \n| | extended | |\n status_date | \"MySql\".datetime | | \n| | plain | |\n lic_category_code | character(1) | | \n| | extended | |\n linked_license_id | numeric(9,0) | | \n| | main | |\n linked_callsign | character(10) | | \n| | extended | |\n country_id | character(2) | | \n| | extended | |\nIndexes:\n \"_EN_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_EN__entity_name\" btree (initcap(entity_name::text))\n \"_EN__first_name\" btree (initcap(first_name::text))\n \"_EN__last_name\" btree (initcap(last_name::text))\n \"_EN__zip5\" btree ((zip_code::character(5)))\n \"_EN_callsign\" btree (callsign)\n \"_EN_fcc_reg_num\" btree (frn)\n \"_EN_licensee_id\" btree (licensee_id)\nCheck constraints:\n \"_EN_record_type_check\" CHECK (record_type = 'EN'::bpchar)\nForeign-key constraints:\n \"_EN_applicant_type_code_fkey\" FOREIGN KEY (applicant_type_code) \nREFERENCES \"FccLookup\".\"_ApplicantType\"(app_type_id\n)\n \"_EN_entity_type_fkey\" FOREIGN KEY (entity_type) REFERENCES \n\"FccLookup\".\"_EntityType\"(entity_id)\n \"_EN_state_fkey\" FOREIGN KEY (state, country_id) REFERENCES \n\"BaseLookup\".\"_Territory\"(territory_id, country_id)\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) 
REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n*VIEW lic_hd_:*\n\n=> \\d+ lic_hd_\n View \"Callsign.lic_hd_\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | text | | | | extended |\n license_status | text | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n end_date | date | | | | plain |\n available_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT lic_hd.sys_id,\n lic_hd.callsign,\n lic_hd.uls_file_num,\n (lic_hd.radio_service::text || ' - '::text) || COALESCE(( SELECT \n\"RadioService\".service_text\n FROM \"RadioService\"\n WHERE lic_hd.radio_service = \"RadioService\".service_id\n LIMIT 1), '???'::character varying)::text AS radio_service,\n (lic_hd.license_status::text || ' - '::text) || COALESCE(( SELECT \n\"LicStatus\".status_text\n FROM \"LicStatus\"\n WHERE lic_hd.license_status = \"LicStatus\".status_id\n LIMIT 1), '???'::character varying)::text AS license_status,\n lic_hd.grant_date,\n lic_hd.effective_date,\n lic_hd.cancel_date,\n lic_hd.expire_date,\n LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,\n CASE\n WHEN lic_hd.cancel_date < lic_hd.expire_date THEN \nGREATEST((lic_hd.cancel_date + '2 years'::interval)::date, \nlic_hd.last_action_date + 30)\n WHEN lic_hd.license_status = 'A'::bpchar AND uls_date() > \n(lic_hd.expire_date + '2 years'::interval)::date THEN NULL::date\n ELSE (lic_hd.expire_date + '2 years'::interval)::date\n END + 1 AS available_date,\n lic_hd.last_action_date\n FROM lic_hd;\n\n*VIEW lic_hd:*\n\n=> \\d+ lic_hd\n View \"Callsign.lic_hd\"\n 
 Column           | Type
------------------+---------------
 sys_id           | integer
 callsign         | character(10)
 uls_file_num     | character(14)
 radio_service    | character(2)
 license_status   | character(1)
 grant_date       | date
 effective_date   | date
 cancel_date      | date
 expire_date      | date
 last_action_date | date
View definition:
 SELECT _lic_hd.sys_id,
    _lic_hd.callsign,
    _lic_hd.uls_file_num,
    _lic_hd.radio_service,
    _lic_hd.license_status,
    _lic_hd.grant_date,
    _lic_hd.effective_date,
    _lic_hd.cancel_date,
    _lic_hd.expire_date,
    _lic_hd.last_action_date
   FROM _lic_hd;

*VIEW _lic_hd:*

=> \d+ _lic_hd
      View "Callsign._lic_hd"
 Column           | Type
------------------+---------------
 sys_id           | integer
 callsign         | character(10)
 uls_file_num     | character(14)
 radio_service    | character(2)
 license_status   | character(1)
 grant_date       | date
 effective_date   | date
 cancel_date      | date
 expire_date      | date
 last_action_date | date
View definition:
 SELECT "_HD".unique_system_identifier AS sys_id,
    "_HD".callsign,
    "_HD".uls_file_number AS uls_file_num,
    "_HD".radio_service_code AS radio_service,
    "_HD".license_status,
    "_HD".grant_date,
    "_HD".effective_date,
    "_HD".cancellation_date AS cancel_date,
    "_HD".expired_date AS expire_date,
    "_HD".last_action_date
   FROM "UlsLic"."_HD";

*TABLE "UlsLic"."_HD":*

=> \d+ "UlsLic"."_HD"
                 Table "UlsLic._HD"
 Column                       | Type                  | Nullable
------------------------------+-----------------------+----------
 record_type                  | character(2)          | not null
 unique_system_identifier     | integer               | not null
 uls_file_number              | character(14)         |
 ebf_number                   | character varying(30) |
 callsign                     | character(10)         |
 license_status               | character(1)          |
 radio_service_code           | character(2)          |
 grant_date                   | date                  |
 expired_date                 | date                  |
 cancellation_date            | date                  |
 eligibility_rule_num         | character(10)         |
 applicant_type_code_reserved | character(1)          |
 alien                        | character(1)          |
 alien_government             | character(1)          |
 alien_corporation            | character(1)          |
 alien_officer                | character(1)          |
 alien_control                | character(1)          |
 revoked                      | character(1)          |
 convicted                    | character(1)          |
 adjudged                     | character(1)          |
 involved_reserved            | character(1)          |
 common_carrier               | character(1)          |
 non_common_carrier           | character(1)          |
 private_comm                 | character(1)          |
 fixed                        | character(1)          |
 mobile                       | character(1)          |
 radiolocation                | character(1)          |
 satellite                    | character(1)          |
 developmental_or_sta         | character(1)          |
 interconnected_service       | character(1)          |
 certifier_first_name         | character varying(20) |
 certifier_mi                 | character varying     |
 certifier_last_name          | character varying     |
 certifier_suffix             | character(3)          |
 certifier_title              | character(40)         |
 gender                       | character(1)          |
 african_american             | character(1)          |
 native_american              | character(1)          |
 hawaiian                     | character(1)          |
 asian                        | character(1)          |
 white                        | character(1)          |
 ethnicity                    | character(1)          |
 effective_date               | date                  |
 last_action_date             | date                  |
 auction_id                   | integer               |
 reg_stat_broad_serv          | character(1)          |
 band_manager                 | character(1)          |
 type_serv_broad_serv         | character(1)          |
 alien_ruling                 | character(1)          |
 licensee_name_change         | character(1)          |
 whitespace_ind               | character(1)          |
 additional_cert_choice       | character(1)          |
 additional_cert_answer       | character(1)          |
 discontinuation_ind          | character(1)          |
 regulatory_compliance_ind    | character(1)          |
 dummy1                       | character varying     |
 dummy2                       | character varying     |
 dummy3                       | character varying     |
 dummy4                       | character varying     |
Indexes:
    "_HD_pkey" PRIMARY KEY, btree (unique_system_identifier)
    "_HD_callsign" btree (callsign)
    "_HD_grant_date" btree (grant_date)
    "_HD_last_action_date" btree (last_action_date)
    "_HD_uls_file_num" btree (uls_file_number)
Check constraints:
    "_HD_record_type_check" CHECK (record_type = 'HD'::bpchar)
Foreign-key constraints:
    "_HD_license_status_fkey" FOREIGN KEY (license_status) REFERENCES "FccLookup"."_LicStatus"(status_id)
    "_HD_radio_service_code_fkey" FOREIGN KEY (radio_service_code) REFERENCES "FccLookup"."_RadioService"(service_id)
Referenced by:
    TABLE ""UlsLic"."_AM"" CONSTRAINT "_AM_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_CO"" CONSTRAINT "_CO_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_EN"" CONSTRAINT "_EN_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_HS"" CONSTRAINT "_HS_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_LA"" CONSTRAINT "_LA_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_SC"" CONSTRAINT "_SC_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE
    TABLE ""UlsLic"."_SF"" CONSTRAINT "_SF_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE

*VIEW lic_am_:*

=> \d+ lic_am_
      View "Callsign.lic_am_"
 Column           | Type
------------------+-----------------------
 sys_id           | integer
 callsign         | character(10)
 uls_region       | "MySql".tinyint
 callsign_group   | text
 operator_group   | text
 operator_class   | text
 prev_class       | text
 prev_callsign    | character(10)
 vanity_type      | text
 is_trustee       | character(1)
 trustee_callsign | character(10)
 trustee_name     | character varying(50)
View definition:
 SELECT lic_am.sys_id,
    lic_am.callsign,
    lic_am.uls_region,
    ( SELECT ("CallsignGroup".group_id::text || ' - '::text) || "CallsignGroup".match_text::text
           FROM "CallsignGroup"
          WHERE lic_am.callsign ~ "CallsignGroup".pattern::text
          LIMIT 1) AS callsign_group,
    ( SELECT (oper_group.group_id::text || ' - '::text) || oper_group.group_text::text
           FROM oper_group
          WHERE lic_am.operator_class = oper_group.class_id
          LIMIT 1) AS operator_group,
    (lic_am.operator_class::text || ' - '::text) || COALESCE(( SELECT "OperatorClass".class_text
           FROM "OperatorClass"
          WHERE lic_am.operator_class = "OperatorClass".class_id
          LIMIT 1), '???'::character varying)::text AS operator_class,
    (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT "OperatorClass".class_text
           FROM "OperatorClass"
          WHERE lic_am.prev_class = "OperatorClass".class_id
          LIMIT 1), '???'::character varying)::text AS prev_class,
    lic_am.prev_callsign,
    (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT "VanityType".vanity_text
           FROM "VanityType"
          WHERE lic_am.vanity_type = "VanityType".vanity_id
          LIMIT 1), '???'::character varying)::text AS vanity_type,
    lic_am.is_trustee,
    lic_am.trustee_callsign,
    lic_am.trustee_name
   FROM lic_am;

*VIEW lic_am:*

=> \d+ lic_am
      View "Callsign.lic_am"
 Column           | Type
------------------+-----------------------
 sys_id           | integer
 callsign         | character(10)
 uls_region       | "MySql".tinyint
 uls_group        | character(1)
 operator_class   | character(1)
 prev_callsign    | character(10)
 prev_class       | character(1)
 vanity_type      | character(1)
 is_trustee       | character(1)
 trustee_callsign | character(10)
 trustee_name     | character varying(50)
View definition:
 SELECT _lic_am.sys_id,
    _lic_am.callsign,
    _lic_am.uls_region,
    _lic_am.uls_group,
    _lic_am.operator_class,
    _lic_am.prev_callsign,
    _lic_am.prev_class,
    _lic_am.vanity_type,
    _lic_am.is_trustee,
    _lic_am.trustee_callsign,
    _lic_am.trustee_name
   FROM _lic_am;

*VIEW _lic_am:*

=> \d+ _lic_am
      View "Callsign._lic_am"
 Column           | Type
------------------+-----------------------
 sys_id           | integer
 callsign         | character(10)
 uls_region       | "MySql".tinyint
 uls_group        | character(1)
 operator_class   | character(1)
 prev_callsign    | character(10)
 prev_class       | character(1)
 vanity_type      | character(1)
 is_trustee       | character(1)
 trustee_callsign | character(10)
 trustee_name     | character varying(50)
View definition:
 SELECT "_AM".unique_system_identifier AS sys_id,
    "_AM".callsign,
    "_AM".region_code AS uls_region,
    "_AM".group_code AS uls_group,
    "_AM".operator_class,
    "_AM".previous_callsign AS prev_callsign,
    "_AM".previous_operator_class AS prev_class,
    "_AM".vanity_callsign_change AS vanity_type,
    "_AM".trustee_indicator AS is_trustee,
    "_AM".trustee_callsign,
    "_AM".trustee_name
   FROM "UlsLic"."_AM";

*TABLE "UlsLic"."_AM":*

=> \d+ "UlsLic"."_AM"
               Table "UlsLic._AM"
 Column                     | Type                  | Nullable
----------------------------+-----------------------+----------
 record_type                | character(2)          | not null
 unique_system_identifier   | integer               | not null
 uls_file_number            | character(14)         |
 ebf_number                 | character varying(30) |
 callsign                   | character(10)         |
 operator_class             | character(1)          |
 group_code                 | character(1)          |
 region_code                | "MySql".tinyint       |
 trustee_callsign           | character(10)         |
 trustee_indicator          | character(1)          |
 physician_certification    | character(1)          |
 ve_signature               | character(1)          |
 systematic_callsign_change | character(1)          |
 vanity_callsign_change     | character(1)          |
 vanity_relationship        | character(12)         |
 previous_callsign          | character(10)         |
 previous_operator_class    | character(1)          |
 trustee_name               | character varying(50) |
Indexes:
    "_AM_pkey" PRIMARY KEY, btree (unique_system_identifier)
    "_AM_callsign" btree (callsign)
    "_AM_prev_callsign" btree (previous_callsign)
    "_AM_trustee_callsign" btree (trustee_callsign)
Check constraints:
    "_AM_record_type_check" CHECK (record_type = 'AM'::bpchar)
Foreign-key constraints:
    "_AM_operator_class_fkey" FOREIGN KEY (operator_class) REFERENCES "FccLookup"."_OperatorClass"(class_id)
    "_AM_previous_operator_class_fkey" FOREIGN KEY (previous_operator_class) REFERENCES
\"FccLookup\".\"_OperatorClass\"(cla\nss_id)\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY \n(vanity_callsign_change) REFERENCES \"FccLookup\".\"_VanityType\"(vanity_i\nd)\n\n\n\n\n\n\n\n [Reposted to the proper list]\n\n I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n at one point), gradually moving to v9.0 w/ replication in 2010. In\n 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to\n v9.6, & was entirely satisfied with the result.\n\n In March of this year, AWS announced that v9.6 was nearing end of\n support, & AWS would forcibly upgrade everyone to v12 on January\n 22, 2022, if users did not perform the upgrade earlier. My first\n attempt was successful as far as the upgrade itself, but complex\n queries that normally ran in a couple of seconds on v9.x, were\n taking minutes in v12.\n\n I didn't have the time in March to diagnose the problem, other than\n some futile adjustments to server parameters, so I reverted back to\n a saved copy of my v9.6 data.\n\n On Sunday, being retired, I decided to attempt to solve the issue in\n earnest. I have now spent five days (about 14 hours a day), trying\n various things, including adding additional indexes. Keeping the\n v9.6 data online for web users, I've \"forked\" the data into new\n copies, & updated them in turn to PostgreSQL v10, v11, v12,\n & v13. All exhibit the same problem: As you will see below, it\n appears that versions 10 & above are doing a sequential scan of\n some of the \"large\" (200K rows) tables. Note that the expected\n & actual run times both differ for v9.6 & v13.2, by more\n than two orders of magnitude. Rather than post a huge eMail\n (ha ha), I'll start with this one, that shows an \"EXPLAIN ANALYZE\"\n from both v9.6 & v13.2, followed by the related table & view\n definitions. 
With one exception, table definitions are from the FCC\n (Federal Communications Commission); the view definitions are my\n own.\n\nHere's from v9.6:\n\n=> EXPLAIN ANALYZE SELECT\n club_count, extra_count, region_count, callsign AS\n trustee_callsign, applicant_type, entity_name, licensee_id AS _lid\n FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count\n DESC, club_count DESC, entity_name;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407.13..407.13 rows=1 width=94) (actual\n time=348.850..348.859 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC,\n \"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual\n time=7.587..348.732 rows=43 loops=1)\n -> Nested Loop (cost=4.47..394.66 rows=1 width=94)\n (actual time=5.740..248.149 rows=43 loops=1)\n -> Nested Loop Left Join (cost=4.04..382.20\n rows=1 width=79) (actual time=2.458..107.908 rows=55 loops=1)\n -> Hash Join (cost=3.75..380.26 rows=1\n width=86) (actual time=2.398..106.990 rows=55 loops=1)\n Hash Cond: ((\"_EN\".country_id =\n \"_GovtRegion\".country_id) AND (\"_EN\".state =\n \"_GovtRegion\".territory_id))\n -> Nested Loop (cost=0.43..376.46\n rows=47 width=94) (actual time=2.294..106.736 rows=55 loops=1)\n -> Seq Scan on \"_Club\" \n (cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101\n rows=44 loops=1)\n Filter: (club_count >=\n 5)\n Rows Removed by Filter: 151\n -> Index Scan using\n \"_EN_callsign\" on \"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual\n time=2.179..2.420 rows=1 loops=44)\n Index Cond: (callsign =\n \"_Club\".trustee_callsign)\n -> Hash (cost=1.93..1.93 rows=93\n width=7) (actual time=0.071..0.071 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory\n Usage: 12kB\n -> Seq Scan on \"_GovtRegion\" \n 
(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034\n rows=93 loops=1)\n -> Nested Loop (cost=0.29..1.93 rows=1\n width=7) (actual time=0.012..0.014 rows=1 loops=55)\n Join Filter: (\"_IsoCountry\".iso_alpha2\n = \"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using\n \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62\n rows=1 width=3) (actual time=0.006..0.006 rows=1 loops=55)\n Index Cond: (iso_alpha2 =\n \"_GovtRegion\".country_id)\n Heap Fetches: 55\n -> Index Only Scan using\n \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1\n width=7)\n (actual time=0.004..0.005 rows=1 loops=55)\n Index Cond: (territory_id =\n \"_GovtRegion\".territory_id)\n Heap Fetches: 59\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n (cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548\n rows=1 loops=55)\n Index Cond: (unique_system_identifier =\n \"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND\n (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan\n 2), '???'::character varying))::text))::character(1) =\n 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.15..8.17 rows=1\n width=32) (actual time=0.006..0.007 rows=1 loops=55)\n -> Index Scan using\n \"_LicStatus_pkey\" on \"_LicStatus\" (cost=0.15..8.17 rows=1\n width=32) (actual time=0.005..0.005 rows=1 loops=55)\n Index Cond:\n (\"_HD\".license_status = status_id)\n -> Index Scan using \"_AM_pkey\" on \"_AM\" \n (cost=0.43..4.27 rows=1 width=15) (actual time=2.325..2.325 rows=1\n loops=43)\n Index Cond: (unique_system_identifier =\n \"_EN\".unique_system_identifier)\n Filter: (\"_EN\".callsign = callsign)\n SubPlan 1\n -> Limit (cost=0.15..8.17 rows=1 width=32) (actual\n time=0.007..0.007 rows=1 loops=43)\n -> Index Scan using \"_ApplicantType_pkey\" on\n \"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual\n time=0.005..0.005 rows=1 loops=43)\n Index Cond: (\"_EN\".applicant_type_code 
=\n app_type_id)\n Planning time: 13.490 ms\n Execution time: 349.182 ms\n (43 rows)\n\n\nHere's from v13.2: \n\n=> EXPLAIN ANALYZE SELECT\n club_count, extra_count, region_count, callsign AS\n trustee_callsign, applicant_type, entity_name, licensee_id AS _lid\n FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count\n DESC, club_count DESC, entity_name;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=144365.60..144365.60 rows=1 width=94) (actual\n time=31898.860..31901.922 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC,\n \"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94)\n (actual time=6132.403..31894.233 rows=43 loops=1)\n -> Nested Loop (cost=58055.51..144364.21 rows=1\n width=62) (actual time=1226.085..30337.921 rows=837792 loops=1)\n -> Nested Loop Left Join \n (cost=58055.09..144360.38 rows=1 width=59) (actual\n time=1062.414..12471.456 rows=1487153 loops=1)\n -> Hash Join (cost=58054.80..144359.69\n rows=1 width=66) (actual time=1061.330..6635.041 rows=1487153\n loops=1)\n Hash Cond:\n ((\"_EN\".unique_system_identifier = \"_AM\".unique_system_identifier)\n AND (\"_EN\".callsign = \"_AM\".callsign))\n -> Hash Join (cost=3.33..53349.72\n rows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153\n loops=1)\n Hash Cond: ((\"_EN\".country_id =\n \"_GovtRegion\".country_id) AND (\"_EN\".state =\n \"_GovtRegion\".territory_id))\n -> Seq Scan on \"_EN\" \n (cost=0.00..45288.05 rows=1509005 width=60) (actual\n time=0.037..2737.054 rows=1508736 loops=1)\n -> Hash (cost=1.93..1.93\n rows=93 width=7) (actual time=0.706..1.264 rows=88 loops=1)\n Buckets: 1024 Batches: 1 \n Memory Usage: 12kB\n -> Seq Scan on\n \"_GovtRegion\" (cost=0.00..1.93 rows=93 width=7) (actual\n 
time=0.013..0.577 rows=93 loops=1)\n -> Hash (cost=28093.99..28093.99\n rows=1506699 width=15) (actual time=1055.587..1055.588\n rows=1506474 loops=1)\n Buckets: 131072 Batches: 32 \n Memory Usage: 3175kB\n -> Seq Scan on \"_AM\" \n (cost=0.00..28093.99 rows=1506699 width=15) (actual\n time=0.009..742.774 rows=1506474 loops=1)\n -> Nested Loop (cost=0.29..0.68 rows=1\n width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n Join Filter: (\"_IsoCountry\".iso_alpha2\n = \"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using\n \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n Index Cond: (iso_alpha2 =\n \"_GovtRegion\".country_id)\n Heap Fetches: 1487153\n -> Index Only Scan using\n \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1\n width=7) (actual time=0.001..0.001 rows=1 loops=1487153)\n Index Cond: (territory_id =\n \"_GovtRegion\".territory_id)\n Heap Fetches: 1550706\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n (cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1\n loops=1487153)\n Index Cond: (unique_system_identifier =\n \"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND\n (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan\n 2), '???'::character varying))::text))::character(1) =\n 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.00..1.07 rows=1\n width=13) (actual time=0.001..0.001 rows=1 loops=1487153)\n -> Seq Scan on \"_LicStatus\" \n (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1\n loops=1487153)\n Filter: (\"_HD\".license_status =\n status_id)\n Rows Removed by Filter: 1\n -> Index Scan using \"_Club_pkey\" on \"_Club\" \n (cost=0.14..0.17 rows=1 width=35) (actual time=0.002..0.002 rows=0\n loops=837792)\n Index Cond: (trustee_callsign = \"_EN\".callsign)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 0\n SubPlan 
1\n -> Limit (cost=0.00..1.20 rows=1 width=15) (actual\n time=0.060..0.060 rows=1 loops=43)\n -> Seq Scan on \"_ApplicantType\" \n (cost=0.00..1.20 rows=1 width=15) (actual time=0.016..0.016 rows=1\n loops=43)\n Filter: (\"_EN\".applicant_type_code =\n app_type_id)\n Rows Removed by Filter: 7\n Planning Time: 173.753 ms\n Execution Time: 31919.601 ms\n (46 rows)\n\n\nVIEW genclub_multi_:\n\n => \\d+ genclub_multi_\n View\n \"Callsign.genclub_multi_\"\n Column | Type | Collation | Nullable\n | Default | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n fcc_reg_num | character(10) | | \n | | extended |\n licensee_id | character(9) | | \n | | extended |\n subgroup_id_num | character(3) | | \n | | extended |\n applicant_type | text | | \n | | extended |\n entity_type | text | | \n | | extended |\n entity_name | character varying(200) | | \n | | extended |\n attention | character varying(35) | | \n | | extended |\n first_name | character varying(20) | | \n | | extended |\n middle_init | character(1) | | \n | | extended |\n last_name | character varying(20) | | \n | | extended |\n name_suffix | character(3) | | \n | | extended |\n street_address | character varying(60) | | \n | | extended |\n po_box | text | | \n | | extended |\n locality | character varying | | \n | | extended |\n locality_ | character varying | | \n | | extended |\n county | character varying | | \n | | extended |\n state | text | | \n | | extended |\n postal_code | text | | \n | | extended |\n full_name | text | | \n | | extended |\n _entity_name | text | | \n | | extended |\n _first_name | text | | \n | | extended |\n _last_name | text | | \n | | extended |\n zip5 | character(5) | | \n | | extended |\n zip_location | \"GeoPosition\" | | \n | | extended |\n maidenhead | bpchar | | \n | | extended |\n geo_region | smallint | | \n | | plain 
 uls_file_num     | character(14)
 radio_service    | text
 license_status   | text
 grant_date       | date
 effective_date   | date
 cancel_date      | date
 expire_date      | date
 end_date         | date
 available_date   | date
 last_action_date | date
 uls_region       | "MySql".tinyint
 callsign_group   | text
 operator_group   | text
 operator_class   | text
 prev_class       | text
 prev_callsign    | character(10)
 vanity_type      | text
 is_trustee       | character(1)
 trustee_callsign | character(10)
 trustee_name     | character varying(50)
 validity         | integer
 club_count       | bigint
 extra_count      | bigint
 region_count     | bigint
View definition:
 SELECT licjb_.sys_id,
    licjb_.callsign,
    licjb_.fcc_reg_num,
    licjb_.licensee_id,
    licjb_.subgroup_id_num,
    licjb_.applicant_type,
    licjb_.entity_type,
    licjb_.entity_name,
    licjb_.attention,
    licjb_.first_name,
    licjb_.middle_init,
    licjb_.last_name,
    licjb_.name_suffix,
    licjb_.street_address,
    licjb_.po_box,
    licjb_.locality,
    licjb_.locality_,
    licjb_.county,
    licjb_.state,
    licjb_.postal_code,
    licjb_.full_name,
    licjb_._entity_name,
    licjb_._first_name,
    licjb_._last_name,
    licjb_.zip5,
    licjb_.zip_location,
    licjb_.maidenhead,
    licjb_.geo_region,
    licjb_.uls_file_num,
    licjb_.radio_service,
    licjb_.license_status,
    licjb_.grant_date,
    licjb_.effective_date,
    licjb_.cancel_date,
    licjb_.expire_date,
    licjb_.end_date,
    licjb_.available_date,
    licjb_.last_action_date,
    licjb_.uls_region,
    licjb_.callsign_group,
    licjb_.operator_group,
    licjb_.operator_class,
    licjb_.prev_class,
    licjb_.prev_callsign,
    licjb_.vanity_type,
    licjb_.is_trustee,
    licjb_.trustee_callsign,
    licjb_.trustee_name,
    licjb_.validity,
    gen.club_count,
    gen.extra_count,
    gen.region_count
   FROM licjb_,
    "GenLicClub" gen
  WHERE licjb_.callsign = gen.trustee_callsign AND licjb_.license_status::character(1) = 'A'::bpchar;

VIEW GenLicClub:

=> \d+ "GenLicClub"
      View "Callsign.GenLicClub"
 Column           | Type
------------------+---------------
 trustee_callsign | character(10)
 club_count       | bigint
 extra_count      | bigint
 region_count     | bigint
View definition:
 SELECT "_Club".trustee_callsign,
    "_Club".club_count,
    "_Club".extra_count,
    "_Club".region_count
   FROM "GenLic"."_Club";

TABLE "GenLic"."_Club":

=> \d+ "GenLic"."_Club"
         Table "GenLic._Club"
 Column           | Type          | Nullable
------------------+---------------+----------
 trustee_callsign | character(10) | not null
 club_count       | bigint        |
 extra_count      | bigint        |
 region_count     | bigint        |
Indexes:
    "_Club_pkey" PRIMARY KEY, btree (trustee_callsign)

VIEW licjb_:

=> \d+ licjb_
      View "Callsign.licjb_"
 Column           | Type
------------------+------------------------
 sys_id           | integer
 callsign         | character(10)
 fcc_reg_num      | character(10)
 licensee_id      | character(9)
 subgroup_id_num  | character(3)
 applicant_type   | text
 entity_type      | text
 entity_name      | character varying(200)
 attention        | character varying(35)
 first_name       | character varying(20)
 middle_init      | character(1)
 last_name        | character varying(20)
 name_suffix      | character(3)
 street_address   | character varying(60)
 po_box           | text
 locality         | character varying
 locality_        | character varying
 county           | character varying
 state            | text
 postal_code      | text
 full_name        | text
 _entity_name     | text
 _first_name      | text
 _last_name       | text
 zip5             | character(5)
 zip_location     | "GeoPosition"
 maidenhead       | bpchar
 geo_region       | smallint
 uls_file_num     | character(14)
 radio_service    | text
 license_status   | text
 grant_date       | date
 effective_date   | date
 cancel_date      | date
 expire_date      | date
 end_date         | date
 available_date   | date
 last_action_date | date
 uls_region       | "MySql".tinyint
 callsign_group   | text
 operator_group   | text
 operator_class   | text
 prev_class       | text
 prev_callsign    | character(10)
 vanity_type      | text
 is_trustee       | character(1)
 trustee_callsign | character(10)
 trustee_name     | character varying(50)
 validity         | integer
View definition:
 SELECT lic_en_.sys_id,
    lic_en_.callsign,
    lic_en_.fcc_reg_num,
    lic_en_.licensee_id,
    lic_en_.subgroup_id_num,
    lic_en_.applicant_type,
    lic_en_.entity_type,
    lic_en_.entity_name,
    lic_en_.attention,
    lic_en_.first_name,
    lic_en_.middle_init,
    lic_en_.last_name,
    lic_en_.name_suffix,
    lic_en_.street_address,
    lic_en_.po_box,
    lic_en_.locality,
    lic_en_.locality_,
    lic_en_.county,
    lic_en_.state,
    lic_en_.postal_code,
    lic_en_.full_name,
    lic_en_._entity_name,
    lic_en_._first_name,
    lic_en_._last_name,
    lic_en_.zip5,
    lic_en_.zip_location,
    lic_en_.maidenhead,
    lic_en_.geo_region,
    lic_hd_.uls_file_num,
    lic_hd_.radio_service,
    lic_hd_.license_status,
    lic_hd_.grant_date,
    lic_hd_.effective_date,
    lic_hd_.cancel_date,
    lic_hd_.expire_date,
    lic_hd_.end_date,
    lic_hd_.available_date,
    lic_hd_.last_action_date,
    lic_am_.uls_region,
    lic_am_.callsign_group,
    lic_am_.operator_group,
    lic_am_.operator_class,
    lic_am_.prev_class,
    lic_am_.prev_callsign,
    lic_am_.vanity_type,
    lic_am_.is_trustee,
    lic_am_.trustee_callsign,
    lic_am_.trustee_name,
        CASE
            WHEN lic_am_.vanity_type::character(1) = ANY (ARRAY['A'::bpchar, 'C'::bpchar])
              THEN verify_callsign(lic_en_.callsign, lic_en_.licensee_id, lic_hd_.grant_date,
                                   lic_en_.state::bpchar, lic_am_.operator_class::bpchar,
                                   lic_en_.applicant_type::bpchar, lic_am_.trustee_callsign)
            ELSE NULL::integer
        END AS validity
   FROM lic_en_
     JOIN lic_hd_ USING (sys_id, callsign)
     JOIN lic_am_ USING (sys_id, callsign);

VIEW lic_en_:

=> \d+ lic_en_
      View "Callsign.lic_en_"
 Column          | Type
-----------------+------------------------
 sys_id          | integer
 callsign        | character(10)
 fcc_reg_num     | character(10)
 licensee_id     | character(9)
 subgroup_id_num | character(3)
 applicant_type  | text
 entity_type     | text
 entity_name     | character varying(200)
 attention       | character varying(35)
 first_name      | character varying(20)
 middle_init     | character(1)
 last_name       | character varying(20)
 name_suffix     | character(3)
 street_address  | character varying(60)
 po_box          | text
 locality        | character varying
 locality_       | character varying
 county          | character varying
 state           | text
 postal_code     | text
 full_name       | text
 _entity_name    | text
 _first_name     | text
 _last_name      | text
 zip5            | character(5)
 zip_location    | "GeoPosition"
 maidenhead      | bpchar
 geo_region      | smallint
View definition:
 SELECT lic_en.sys_id,
    lic_en.callsign,
    lic_en.fcc_reg_num,
    lic_en.licensee_id,
    lic_en.subgroup_id_num,
    (lic_en.applicant_type::text || ' - '::text) || COALESCE(( SELECT "ApplicantType".app_type_text
           FROM "ApplicantType"
          WHERE lic_en.applicant_type = "ApplicantType".app_type_id
          LIMIT 1), '???'::character varying)::text AS applicant_type,
    (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT "EntityType".entity_text
           FROM "EntityType"
          WHERE lic_en.entity_type = "EntityType".entity_id
          LIMIT 1), '???'::character varying)::text AS entity_type,
    lic_en.entity_name,
    lic_en.attention,
    lic_en.first_name,
    lic_en.middle_init,
    lic_en.last_name,
    lic_en.name_suffix,
    lic_en.street_address,
    lic_en.po_box,
    lic_en.locality,
    zip_code.locality_text AS locality_,
    "County".county_text AS county,
    (territory_id::text || ' - '::text) || COALESCE(govt_region.territory_text, '???'::character varying)::text AS state,
    zip9_format(lic_en.postal_code::text) AS postal_code,
    lic_en.full_name,
    lic_en._entity_name,
    lic_en._first_name,
    lic_en._last_name,
    lic_en.zip5,
    zip_code.zip_location,
    maidenhead(zip_code.zip_location) AS maidenhead,
    govt_region.geo_region
   FROM lic_en
     JOIN govt_region USING (territory_id, country_id)
     LEFT JOIN zip_code USING (territory_id, country_id, zip5)
     LEFT JOIN "County" USING (territory_id, country_id, fips_county);

VIEW lic_en:

=> \d+ lic_en
      View "Callsign.lic_en"
 Column          | Type
-----------------+------------------------
 sys_id          | integer
 callsign        | character(10)
 fcc_reg_num     | character(10)
 licensee_id     | character(9)
 subgroup_id_num | character(3)
 applicant_type  | character(1)
 entity_type     | character(2)
 entity_name     | character varying(200)
 attention       | character varying(35)
 first_name      | character varying(20)
 middle_init     | character(1)
 last_name       | character varying(20)
 name_suffix     | character(3)
 street_address  | character varying(60)
 po_box          | text
 locality        | character varying
 territory_id    | character(2)
 postal_code     | character(9)
 full_name       | text
 _entity_name    | text
 _first_name     | text
 _last_name      | text
 zip5            | character(5)
 country_id      | character(2)
View definition:
 SELECT _lic_en.sys_id,
    _lic_en.callsign,
    _lic_en.fcc_reg_num,
    _lic_en.licensee_id,
    _lic_en.subgroup_id_num,
    _lic_en.applicant_type,
    _lic_en.entity_type,
    _lic_en.entity_name,
    _lic_en.attention,
    _lic_en.first_name,
    _lic_en.middle_init,
    _lic_en.last_name,
    _lic_en.name_suffix,
    _lic_en.street_address,
    _lic_en.po_box,
    _lic_en.locality,
    _lic_en.territory_id,
    _lic_en.postal_code,
    _lic_en.full_name,
    _lic_en._entity_name,
    _lic_en._first_name,
    _lic_en._last_name,
    _lic_en.zip5,
    _lic_en.country_id
   FROM _lic_en;

VIEW _lic_en:

=> \d+ _lic_en
      View "Callsign._lic_en"
 Column          | Type
-----------------+------------------------
 sys_id          | integer
 callsign        | character(10)
 fcc_reg_num     | character(10)
 licensee_id     | character(9)
 subgroup_id_num | character(3)
 applicant_type  | character(1)
 entity_type     | character(2)
 entity_name     | character varying(200)
 attention       | character varying(35)
 first_name      | character varying(20)
 middle_init     | character(1)
 last_name       | character varying(20)
 name_suffix     | character(3)
 street_address  | character varying(60)
 po_box          | text
 locality        | character varying
 territory_id    | character(2)
 postal_code     | character(9)
 full_name       | text
 _entity_name    | text
 _first_name     | text
 _last_name      | text
 zip5            | character(5)
 country_id      | character(2)
View definition:
 SELECT "_EN".unique_system_identifier AS sys_id,
    "_EN".callsign,
    "_EN".frn AS fcc_reg_num,
    "_EN".licensee_id,
    "_EN".sgin AS subgroup_id_num,
    "_EN".applicant_type_code AS applicant_type,
    "_EN".entity_type,
    "_EN".entity_name,
    "_EN".attention_line AS attention,
    "_EN".first_name,
    "_EN".mi AS middle_init,
    "_EN".last_name,
    "_EN".suffix AS name_suffix,
    "_EN".street_address,
    po_box_format("_EN".po_box::text) AS po_box,
    "_EN".city AS locality,
    "_EN".state AS territory_id,
    "_EN".zip_code AS postal_code,
    initcap(((COALESCE("_EN".first_name::text || ' '::text, ''::text) || COALESCE("_EN".mi::text || ' '::text, ''::text)) || "_EN".last_name::text) || COALESCE(' '::text || "_EN".suffix::text, ''::text)) AS full_name,
    initcap("_EN".entity_name::text) AS _entity_name,
    initcap("_EN".first_name::text) AS _first_name,
    initcap("_EN".last_name::text) AS _last_name,
    "_EN".zip_code::character(5) AS zip5,
    "_EN".country_id
   FROM "UlsLic"."_EN";

TABLE "UlsLic"."_EN":

=> \d+ "UlsLic"."_EN"
               Table "UlsLic._EN"
 Column                   | Type                   | Nullable
--------------------------+------------------------+----------
 record_type              | character(2)           | not null
 unique_system_identifier | integer                | not null
 uls_file_number          | character(14)          |
 ebf_number               | character varying(30)  |
 callsign                 | character(10)          |
 entity_type              | character(2)           |
 licensee_id              | character(9)           |
| |\n entity_name | character varying(200) | \n | | | extended | |\n first_name | character varying(20) | \n | | | extended | |\n mi | character(1) | \n | | | extended | |\n last_name | character varying(20) | \n | | | extended | |\n suffix | character(3) | \n | | | extended | |\n phone | character(10) | \n | | | extended | |\n fax | character(10) | \n | | | extended | |\n email | character varying(50) | \n | | | extended | |\n street_address | character varying(60) | \n | | | extended | |\n city | character varying | \n | | | extended | |\n state | character(2) | \n | | | extended | |\n zip_code | character(9) | \n | | | extended | |\n po_box | character varying(20) | \n | | | extended | |\n attention_line | character varying(35) | \n | | | extended | |\n sgin | character(3) | \n | | | extended | |\n frn | character(10) | \n | | | extended | |\n applicant_type_code | character(1) | \n | | | extended | |\n applicant_type_other | character(40) | \n | | | extended | |\n status_code | character(1) | \n | | | extended | |\n status_date | \"MySql\".datetime | \n | | | plain | |\n lic_category_code | character(1) | \n | | | extended | |\n linked_license_id | numeric(9,0) | \n | | | main | |\n linked_callsign | character(10) | \n | | | extended | |\n country_id | character(2) | \n | | | extended | |\n Indexes:\n \"_EN_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_EN__entity_name\" btree (initcap(entity_name::text))\n \"_EN__first_name\" btree (initcap(first_name::text))\n \"_EN__last_name\" btree (initcap(last_name::text))\n \"_EN__zip5\" btree ((zip_code::character(5)))\n \"_EN_callsign\" btree (callsign)\n \"_EN_fcc_reg_num\" btree (frn)\n \"_EN_licensee_id\" btree (licensee_id)\n Check constraints:\n \"_EN_record_type_check\" CHECK (record_type = 'EN'::bpchar)\n Foreign-key constraints:\n \"_EN_applicant_type_code_fkey\" FOREIGN KEY\n (applicant_type_code) REFERENCES\n \"FccLookup\".\"_ApplicantType\"(app_type_id\n )\n \"_EN_entity_type_fkey\" FOREIGN KEY 
(entity_type) REFERENCES\n \"FccLookup\".\"_EntityType\"(entity_id)\n \"_EN_state_fkey\" FOREIGN KEY (state, country_id) REFERENCES\n \"BaseLookup\".\"_Territory\"(territory_id, country_id)\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFERENCES\n \"UlsLic\".\"_HD\"(unique_system_i\n dentifier) ON UPDATE CASCADE ON DELETE CASCADE\n\n\nVIEW lic_hd_:\n\n=> \\d+ lic_hd_\n View \"Callsign.lic_hd_\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | text | | | \n | extended |\n license_status | text | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n end_date | date | | | \n | plain |\n available_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT lic_hd.sys_id,\n lic_hd.callsign,\n lic_hd.uls_file_num,\n (lic_hd.radio_service::text || ' - '::text) || COALESCE((\n SELECT \"RadioService\".service_text\n FROM \"RadioService\"\n WHERE lic_hd.radio_service = \"RadioService\".service_id\n LIMIT 1), '???'::character varying)::text AS\n radio_service,\n (lic_hd.license_status::text || ' - '::text) || COALESCE((\n SELECT \"LicStatus\".status_text\n FROM \"LicStatus\"\n WHERE lic_hd.license_status = \"LicStatus\".status_id\n LIMIT 1), '???'::character varying)::text AS\n license_status,\n lic_hd.grant_date,\n lic_hd.effective_date,\n lic_hd.cancel_date,\n lic_hd.expire_date,\n LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,\n CASE\n WHEN lic_hd.cancel_date < lic_hd.expire_date THEN\n GREATEST((lic_hd.cancel_date + '2 years'::interval)::date,\n lic_hd.last_action_date + 30)\n WHEN 
lic_hd.license_status = 'A'::bpchar AND\n uls_date() > (lic_hd.expire_date + '2 years'::interval)::date\n THEN NULL::date\n ELSE (lic_hd.expire_date + '2 years'::interval)::date\n END + 1 AS available_date,\n lic_hd.last_action_date\n FROM lic_hd;\n\nVIEW lic_hd:\n\n=> \\d+ lic_hd\n View \"Callsign.lic_hd\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | character(2) | | | \n | extended |\n license_status | character(1) | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT _lic_hd.sys_id,\n _lic_hd.callsign,\n _lic_hd.uls_file_num,\n _lic_hd.radio_service,\n _lic_hd.license_status,\n _lic_hd.grant_date,\n _lic_hd.effective_date,\n _lic_hd.cancel_date,\n _lic_hd.expire_date,\n _lic_hd.last_action_date\n FROM _lic_hd;\n\nVIEW _lic_hd:\n\n=> \\d+ _lic_hd\n View \"Callsign._lic_hd\"\n Column | Type | Collation | Nullable | Default\n | Storage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | \n | plain |\n callsign | character(10) | | | \n | extended |\n uls_file_num | character(14) | | | \n | extended |\n radio_service | character(2) | | | \n | extended |\n license_status | character(1) | | | \n | extended |\n grant_date | date | | | \n | plain |\n effective_date | date | | | \n | plain |\n cancel_date | date | | | \n | plain |\n expire_date | date | | | \n | plain |\n last_action_date | date | | | \n | plain |\n View definition:\n SELECT \"_HD\".unique_system_identifier AS sys_id,\n \"_HD\".callsign,\n 
\"_HD\".uls_file_number AS uls_file_num,\n \"_HD\".radio_service_code AS radio_service,\n \"_HD\".license_status,\n \"_HD\".grant_date,\n \"_HD\".effective_date,\n \"_HD\".cancellation_date AS cancel_date,\n \"_HD\".expired_date AS expire_date,\n \"_HD\".last_action_date\n FROM \"UlsLic\".\"_HD\";\n\nTABLE \"UlsLic\".\"_HD\":\n\n=> \\d+ \"UlsLic\".\"_HD\"\n Table\n \"UlsLic._HD\"\n Column | Type | Collation\n | Nullable | Default | Storage | Stats target | Descr\n iption\n------------------------------+-----------------------+-----------+----------+---------+----------+--------------+------\n -------\n record_type | character(2) | \n | not null | | extended | |\n unique_system_identifier | integer | \n | not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n license_status | character(1) | \n | | | extended | |\n radio_service_code | character(2) | \n | | | extended | |\n grant_date | date | \n | | | plain | |\n expired_date | date | \n | | | plain | |\n cancellation_date | date | \n | | | plain | |\n eligibility_rule_num | character(10) | \n | | | extended | |\n applicant_type_code_reserved | character(1) | \n | | | extended | |\n alien | character(1) | \n | | | extended | |\n alien_government | character(1) | \n | | | extended | |\n alien_corporation | character(1) | \n | | | extended | |\n alien_officer | character(1) | \n | | | extended | |\n alien_control | character(1) | \n | | | extended | |\n revoked | character(1) | \n | | | extended | |\n convicted | character(1) | \n | | | extended | |\n adjudged | character(1) | \n | | | extended | |\n involved_reserved | character(1) | \n | | | extended | |\n common_carrier | character(1) | \n | | | extended | |\n non_common_carrier | character(1) | \n | | | extended | |\n private_comm | character(1) | \n | | | extended | |\n fixed | character(1) | \n | | | extended | |\n mobile | 
character(1) | \n | | | extended | |\n radiolocation | character(1) | \n | | | extended | |\n satellite | character(1) | \n | | | extended | |\n developmental_or_sta | character(1) | \n | | | extended | |\n interconnected_service | character(1) | \n | | | extended | |\n certifier_first_name | character varying(20) | \n | | | extended | |\n certifier_mi | character varying | \n | | | extended | |\n certifier_last_name | character varying | \n | | | extended | |\n certifier_suffix | character(3) | \n | | | extended | |\n certifier_title | character(40) | \n | | | extended | |\n gender | character(1) | \n | | | extended | |\n african_american | character(1) | \n | | | extended | |\n native_american | character(1) | \n | | | extended | |\n hawaiian | character(1) | \n | | | extended | |\n asian | character(1) | \n | | | extended | |\n white | character(1) | \n | | | extended | |\n ethnicity | character(1) | \n | | | extended | |\n effective_date | date | \n | | | plain | |\n last_action_date | date | \n | | | plain | |\n auction_id | integer | \n | | | plain | |\n reg_stat_broad_serv | character(1) | \n | | | extended | |\n band_manager | character(1) | \n | | | extended | |\n type_serv_broad_serv | character(1) | \n | | | extended | |\n alien_ruling | character(1) | \n | | | extended | |\n licensee_name_change | character(1) | \n | | | extended | |\n whitespace_ind | character(1) | \n | | | extended | |\n additional_cert_choice | character(1) | \n | | | extended | |\n additional_cert_answer | character(1) | \n | | | extended | |\n discontinuation_ind | character(1) | \n | | | extended | |\n regulatory_compliance_ind | character(1) | \n | | | extended | |\n dummy1 | character varying | \n | | | extended | |\n dummy2 | character varying | \n | | | extended | |\n dummy3 | character varying | \n | | | extended | |\n dummy4 | character varying | \n | | | extended | |\n Indexes:\n \"_HD_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_HD_callsign\" btree 
(callsign)\n \"_HD_grant_date\" btree (grant_date)\n \"_HD_last_action_date\" btree (last_action_date)\n \"_HD_uls_file_num\" btree (uls_file_number)\n Check constraints:\n \"_HD_record_type_check\" CHECK (record_type = 'HD'::bpchar)\n Foreign-key constraints:\n \"_HD_license_status_fkey\" FOREIGN KEY (license_status)\n REFERENCES \"FccLookup\".\"_LicStatus\"(status_id)\n \"_HD_radio_service_code_fkey\" FOREIGN KEY (radio_service_code)\n REFERENCES \"FccLookup\".\"_RadioService\"(service_id)\n Referenced by:\n TABLE \"\"UlsLic\".\"_AM\"\" CONSTRAINT\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_CO\"\" CONSTRAINT\n \"_CO_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_EN\"\" CONSTRAINT\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_HS\"\" CONSTRAINT\n \"_HS_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_LA\"\" CONSTRAINT\n \"_LA_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SC\"\" CONSTRAINT\n \"_SC_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SF\"\" CONSTRAINT\n \"_SF_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES 
\"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n\nVIEW lic_am_:\n\n => \\d+ lic_am_\n View \"Callsign.lic_am_\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n callsign_group | text | | \n | | extended |\n operator_group | text | | \n | | extended |\n operator_class | text | | \n | | extended |\n prev_class | text | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n vanity_type | text | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT lic_am.sys_id,\n lic_am.callsign,\n lic_am.uls_region,\n ( SELECT (\"CallsignGroup\".group_id::text || ' - '::text) ||\n \"CallsignGroup\".match_text::text\n FROM \"CallsignGroup\"\n WHERE lic_am.callsign ~ \"CallsignGroup\".pattern::text\n LIMIT 1) AS callsign_group,\n ( SELECT (oper_group.group_id::text || ' - '::text) ||\n oper_group.group_text::text\n FROM oper_group\n WHERE lic_am.operator_class = oper_group.class_id\n LIMIT 1) AS operator_group,\n (lic_am.operator_class::text || ' - '::text) || COALESCE((\n SELECT \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.operator_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS\n operator_class,\n (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT\n \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.prev_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS prev_class,\n lic_am.prev_callsign,\n (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT\n \"VanityType\".vanity_text\n FROM 
\"VanityType\"\n WHERE lic_am.vanity_type = \"VanityType\".vanity_id\n LIMIT 1), '???'::character varying)::text AS vanity_type,\n lic_am.is_trustee,\n lic_am.trustee_callsign,\n lic_am.trustee_name\n FROM lic_am;\n\nVIEW lic_am:\n\n=> \\d+ lic_am\n View \"Callsign.lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT _lic_am.sys_id,\n _lic_am.callsign,\n _lic_am.uls_region,\n _lic_am.uls_group,\n _lic_am.operator_class,\n _lic_am.prev_callsign,\n _lic_am.prev_class,\n _lic_am.vanity_type,\n _lic_am.is_trustee,\n _lic_am.trustee_callsign,\n _lic_am.trustee_name\n FROM _lic_am;\n\nVIEW _lic_am:\n\n=> \\d+ _lic_am\n View \"Callsign._lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n 
trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT \"_AM\".unique_system_identifier AS sys_id,\n \"_AM\".callsign,\n \"_AM\".region_code AS uls_region,\n \"_AM\".group_code AS uls_group,\n \"_AM\".operator_class,\n \"_AM\".previous_callsign AS prev_callsign,\n \"_AM\".previous_operator_class AS prev_class,\n \"_AM\".vanity_callsign_change AS vanity_type,\n \"_AM\".trustee_indicator AS is_trustee,\n \"_AM\".trustee_callsign,\n \"_AM\".trustee_name\n FROM \"UlsLic\".\"_AM\";\n\nTABLE \"UlsLic\".\"_AM\":\n\n=> \\d+ \"UlsLic\".\"_AM\"\n Table\n \"UlsLic._AM\"\n Column | Type | Collation |\n Nullable | Default | Storage | Stats target | Description\n----------------------------+-----------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | |\n not null | | extended | |\n unique_system_identifier | integer | |\n not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n operator_class | character(1) | \n | | | extended | |\n group_code | character(1) | \n | | | extended | |\n region_code | \"MySql\".tinyint | \n | | | plain | |\n trustee_callsign | character(10) | \n | | | extended | |\n trustee_indicator | character(1) | \n | | | extended | |\n physician_certification | character(1) | \n | | | extended | |\n ve_signature | character(1) | \n | | | extended | |\n systematic_callsign_change | character(1) | \n | | | extended | |\n vanity_callsign_change | character(1) | \n | | | extended | |\n vanity_relationship | character(12) | \n | | | extended | |\n previous_callsign | character(10) | \n | | | extended | |\n previous_operator_class | character(1) | \n | | | extended | |\n trustee_name | character varying(50) | \n | | | extended | |\n Indexes:\n \"_AM_pkey\" PRIMARY KEY, btree 
(unique_system_identifier)\n \"_AM_callsign\" btree (callsign)\n \"_AM_prev_callsign\" btree (previous_callsign)\n \"_AM_trustee_callsign\" btree (trustee_callsign)\n Check constraints:\n \"_AM_record_type_check\" CHECK (record_type = 'AM'::bpchar)\n Foreign-key constraints:\n \"_AM_operator_class_fkey\" FOREIGN KEY (operator_class)\n REFERENCES \"FccLookup\".\"_OperatorClass\"(class_id)\n \"_AM_previous_operator_class_fkey\" FOREIGN KEY\n (previous_operator_class) REFERENCES\n \"FccLookup\".\"_OperatorClass\"(cla\n ss_id)\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFERENCES\n \"UlsLic\".\"_HD\"(unique_system_i\n dentifier) ON UPDATE CASCADE ON DELETE CASCADE\n \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY\n (vanity_callsign_change) REFERENCES\n \"FccLookup\".\"_VanityType\"(vanity_i\n d)",
"msg_date": "Fri, 28 May 2021 11:48:28 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> [Reposted to the proper list]\n>\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> at one point), gradually moving to v9.0 w/ replication in 2010. In\n> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> & was entirely satisfied with the result.\n>\n> In March of this year, AWS announced that v9.6 was nearing end of\n> support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> 2022, if users did not perform the upgrade earlier. My first attempt\n> was successful as far as the upgrade itself, but complex queries that\n> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>\n> I didn't have the time in March to diagnose the problem, other than\n> some futile adjustments to server parameters, so I reverted back to a\n> saved copy of my v9.6 data.\n>\n> On Sunday, being retired, I decided to attempt to solve the issue in\n> earnest. I have now spent five days (about 14 hours a day), trying\n> various things, including adding additional indexes. Keeping the v9.6\n> data online for web users, I've \"forked\" the data into new copies, &\n> updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> the same problem: As you will see below, it appears that versions 10\n> & above are doing a sequential scan of some of the \"large\" (200K rows)\n> tables. Note that the expected & actual run times both differ for\n> v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> definitions. 
With one exception, table definitions are from the FCC\n> (Federal Communications Commission); the view definitions are my own.\n>\n>\n>\n\nHave you tried reproducing these results outside RDS, say on an EC2\ninstance running vanilla PostgreSQL?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 15:08:19 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Also, did you check your RDS setting in AWS after upgrading? I run four databases in AWS. I found that the work_mem was set way low after an upgrade. I had to tweak many of my settings.\n\nLance\n\nFrom: Andrew Dunstan <[email protected]>\nDate: Friday, May 28, 2021 at 2:08 PM\nTo: Dean Gibson (DB Administrator) <[email protected]>, [email protected] <[email protected]>\nSubject: Re: AWS forcing PG upgrade from v9.6 a disaster\n\nOn 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> [Reposted to the proper list]\n>\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> at one point), gradually moving to v9.0 w/ replication in 2010. In\n> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> & was entirely satisfied with the result.\n>\n> In March of this year, AWS announced that v9.6 was nearing end of\n> support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> 2022, if users did not perform the upgrade earlier. My first attempt\n> was successful as far as the upgrade itself, but complex queries that\n> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>\n> I didn't have the time in March to diagnose the problem, other than\n> some futile adjustments to server parameters, so I reverted back to a\n> saved copy of my v9.6 data.\n>\n> On Sunday, being retired, I decided to attempt to solve the issue in\n> earnest. I have now spent five days (about 14 hours a day), trying\n> various things, including adding additional indexes. Keeping the v9.6\n> data online for web users, I've \"forked\" the data into new copies, &\n> updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> the same problem: As you will see below, it appears that versions 10\n> & above are doing a sequential scan of some of the \"large\" (200K rows)\n> tables. Note that the expected & actual run times both differ for\n> v9.6 & v13.2, by more than *two orders of magnitude*. 
Rather than post\n> a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> definitions. With one exception, table definitions are from the FCC\n> (Federal Communications Commission); the view definitions are my own.\n>\n>\n>\n\nHave you tried reproducing these results outside RDS, say on an EC2\ninstance running vanilla PostgreSQL?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$<https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>",
"msg_date": "Fri, 28 May 2021 19:18:59 +0000",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Hi Lance,\n\nDid you customize the PG 12 DB Parameter group to be in sync as much as \npossible with the 9.6 RDS version? Or are you using PG12 default DB \nParameter group?\n\nAre you using the same AWS Instance Class?\n\nDid you vacuum analyze all your tables after the upgrade to 12?\n\nRegards,\nMichael Vitale\n\nCampbell, Lance wrote on 5/28/2021 3:18 PM:\n>\n> Also, did you check your RDS setting in AWS after upgrading? I run \n> four databases in AWS. I found that the work_mem was set way low \n> after an upgrade. I had to tweak many of my settings.\n>\n> Lance\n>\n> *From: *Andrew Dunstan <[email protected]>\n> *Date: *Friday, May 28, 2021 at 2:08 PM\n> *To: *Dean Gibson (DB Administrator) <[email protected]>, \n> [email protected] \n> <[email protected]>\n> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>\n>\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> > [Reposted to the proper list]\n> >\n> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> > at one point), gradually moving to v9.0 w/ replication in 2010. In\n> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> > & was entirely satisfied with the result.\n> >\n> > In March of this year, AWS announced that v9.6 was nearing end of\n> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> > 2022, if users did not perform the upgrade earlier. My first attempt\n> > was successful as far as the upgrade itself, but complex queries that\n> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n> >\n> > I didn't have the time in March to diagnose the problem, other than\n> > some futile adjustments to server parameters, so I reverted back to a\n> > saved copy of my v9.6 data.\n> >\n> > On Sunday, being retired, I decided to attempt to solve the issue in\n> > earnest. I have now spent five days (about 14 hours a day), trying\n> > various things, including adding additional 
indexes.� Keeping the v9.6\n> > data online for web users, I've \"forked\" the data into new copies, &\n> > updated them in turn to PostgreSQL v10, v11, v12, & v13.� All exhibit\n> > the same problem:� As you will see below, it appears that versions 10\n> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n> > tables.� Note that the expected & actual run times both differ for\n> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> > definitions.� With one exception, table definitions are from the FCC\n> > (Federal Communications Commission);� the view definitions are my own.\n> >\n> >\n> >\n>\n> Have you tried reproducing these results outside RDS, say on an EC2\n> instance running vanilla PostgreSQL?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB: \n> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$ \n> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;%21%21DZ3fjg%21tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$> \n>\n>\n>\n\n\n\n\nHi Lance,\n\nDid you customize the PG 12 DB Parameter group to be in sync as much as \npossible with the 9.6 RDS version?� Or are you using PG12 default DB \nParameter group?\n\nAre you using the same AWS Instance Class?\n\nDid you vacuum analyze all your tables after the upgrade to 12?\n\nRegards,\nMichael Vitale\n\nCampbell, Lance wrote on 5/28/2021 3:18 PM:\n\n\n\n\n\nAlso, did you check your RDS setting in AWS after \nupgrading?� I run four databases in AWS.� I found that the work_mem was \nset way low after an upgrade.� I had to tweak many of my settings.\n�\nLance\n�\n\nFrom:\nAndrew Dunstan \n<[email protected]>\nDate: Friday, May 28, 2021 at 2:08 PM\nTo: Dean Gibson (DB Administrator) \n<[email 
protected]>, [email protected]\n<[email protected]>\nSubject: Re: AWS forcing PG upgrade from v9.6 a disaster\n\n\n\nOn 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> [Reposted to the proper list]\n>\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems \n(4\n> at one point), gradually moving to v9.0 w/ replication in 2010.� In\n> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to \nv9.6,\n> & was entirely satisfied with the result.\n>\n> In March of this year, AWS announced that v9.6 was nearing end of\n> support, & AWS would forcibly upgrade everyone to v12 on \nJanuary 22,\n> 2022, if users did not perform the upgrade earlier.� My first \nattempt\n> was successful as far as the upgrade itself, but complex queries \nthat\n> normally ran in a couple of seconds on v9.x, were taking minutes in\n v12.\n>\n> I didn't have the time in March to diagnose the problem, other than\n> some futile adjustments to server parameters, so I reverted back to\n a\n> saved copy of my v9.6 data.\n>\n> On Sunday, being retired, I decided to attempt to solve the issue \nin\n> earnest.� I have now spent five days (about 14 hours a day), trying\n> various things, including adding additional indexes.� Keeping the \nv9.6\n> data online for web users, I've \"forked\" the data into new copies, \n&\n> updated them in turn to PostgreSQL v10, v11, v12, & v13.� All \nexhibit\n> the same problem:� As you will see below, it appears that versions \n10\n> & above are doing a sequential scan of some of the \"large\" \n(200K rows)\n> tables.� Note that the expected & actual run times both differ \nfor\n> v9.6 & v13.2, by more than *two orders of magnitude*. 
Rather \nthan post\n> a huge eMail (ha ha), I'll start with this one, that shows an \n\"EXPLAIN\n> ANALYZE\" from both v9.6 & v13.2, followed by the related table \n& view\n> definitions.� With one exception, table definitions are from the \nFCC\n> (Federal Communications Commission);� the view definitions are my \nown.\n>\n>\n>\n\nHave you tried reproducing these results outside RDS, say on an EC2\ninstance running vanilla PostgreSQL?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: \nhttps://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$",
"msg_date": "Fri, 28 May 2021 15:38:58 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "The problem is the plan. The planner massively underestimated the number of\nrows arising from the _EN/_AM join.\n\nUsually postgres is pretty good about running ANALYZE as needed, but it\nmight be a good idea to run it manually to rule that out as a potential\nculprit.\n\n\nOn Fri, May 28, 2021 at 3:19 PM Campbell, Lance <[email protected]> wrote:\n\n> Also, did you check your RDS setting in AWS after upgrading? I run four\n> databases in AWS. I found that the work_mem was set way low after an\n> upgrade. I had to tweak many of my settings.\n>\n>\n>\n> Lance\n>\n>\n>\n> *From: *Andrew Dunstan <[email protected]>\n> *Date: *Friday, May 28, 2021 at 2:08 PM\n> *To: *Dean Gibson (DB Administrator) <[email protected]>,\n> [email protected] <\n> [email protected]>\n> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>\n>\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> > [Reposted to the proper list]\n> >\n> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> > at one point), gradually moving to v9.0 w/ replication in 2010. In\n> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> > & was entirely satisfied with the result.\n> >\n> > In March of this year, AWS announced that v9.6 was nearing end of\n> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> > 2022, if users did not perform the upgrade earlier. My first attempt\n> > was successful as far as the upgrade itself, but complex queries that\n> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n> >\n> > I didn't have the time in March to diagnose the problem, other than\n> > some futile adjustments to server parameters, so I reverted back to a\n> > saved copy of my v9.6 data.\n> >\n> > On Sunday, being retired, I decided to attempt to solve the issue in\n> > earnest. I have now spent five days (about 14 hours a day), trying\n> > various things, including adding additional indexes. 
Keeping the v9.6\n> > data online for web users, I've \"forked\" the data into new copies, &\n> > updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> > the same problem: As you will see below, it appears that versions 10\n> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n> > tables. Note that the expected & actual run times both differ for\n> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> > definitions. With one exception, table definitions are from the FCC\n> > (Federal Communications Commission); the view definitions are my own.\n> >\n> >\n> >\n>\n> Have you tried reproducing these results outside RDS, say on an EC2\n> instance running vanilla PostgreSQL?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB:\n> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$\n> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>\n>\n>\n>",
"msg_date": "Fri, 28 May 2021 15:39:23 -0400",
"msg_from": "Ryan Bair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "The plan is also influenced by cost related and memory related config\nsettings such as random_page_cost and work_mem, right? Hence the questions\nif configs are matching or newer versions are using very conservative\n(default) settings.",
"msg_date": "Fri, 28 May 2021 14:11:09 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n\nWhat sticks out for me are these two scans, which balloon from 50-60 \nheap fetches to 1.5M each.\n\n> -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n> (actual time=0.003..0.004 rows=1 loops=1487153)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \n> width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 1487153\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \n> time=0.001..0.001 rows=1 loops=1487153)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 1550706\n\nHow did you load the database? pg_dump -> psql/pg_restore?\n\nIf so, did you perform a VACUUM FREEZE after the load?\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPostgres User since 1994\n\n\n",
"msg_date": "Fri, 28 May 2021 16:23:06 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-05-28 12:08, Andrew Dunstan wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> [Reposted to the proper list]\n>>\n>> ...\n>>\n>>\n>> Have you tried reproducing these results outside RDS, say on an EC2 instance running vanilla PostgreSQL?\n>>\n>> cheers, andrew\n>>\n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n\nThat is step #2 of my backup plan:\n\n 1. Create an EC2 instance running community v9.6. Once that is done\n & running successfully, I'm golden for a long, long time.\n 2. If I am curious (& not worn out), take a snapshot of #1 & update it\n to v13.\n\n\n-- Dean",
"msg_date": "Fri, 28 May 2021 13:33:18 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-05-28 12:18, Campbell, Lance wrote:\n>\n> Also, did you check your RDS setting in AWS after upgrading? I run \n> four databases in AWS. I found that the work_mem was set way low \n> after an upgrade. I had to tweak many of my settings.\n>\n> Lance\n>\n>\n\nI've wondered a lot about work_mem. The default setting (which I've \ntried) involves a formula, so I have no idea what the actual value is. \nSince I have a db.t2.micro (now db.t3.micro) instance with only 1GB of \nRAM, I've tried a value of 8000. No difference.",
"msg_date": "Fri, 28 May 2021 13:37:41 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "pá 28. 5. 2021 v 21:39 odesílatel Ryan Bair <[email protected]> napsal:\n\n> The problem is the plan. The planner massively underestimated the number\n> of rows arising from the _EN/_AM join.\n>\n> Usually postgres is pretty good about running ANALYZE as needed, but it\n> might be a good idea to run it manually to rule that out as a potential\n> culprit.\n>\n\nyes\n\nthe very strange is pretty high planning time\n\n Planning Time: 173.753 ms\n\nThis is unusually high number - maybe the server has bad CPU or maybe some\nindexes bloating\n\nRegards\n\nPavel\n\nOn Fri, May 28, 2021 at 3:19 PM Campbell, Lance <[email protected]> wrote:\n>\n>> Also, did you check your RDS setting in AWS after upgrading? I run four\n>> databases in AWS. I found that the work_mem was set way low after an\n>> upgrade. I had to tweak many of my settings.\n>>\n>>\n>>\n>> Lance\n>>\n>>\n>>\n>> *From: *Andrew Dunstan <[email protected]>\n>> *Date: *Friday, May 28, 2021 at 2:08 PM\n>> *To: *Dean Gibson (DB Administrator) <[email protected]>,\n>> [email protected] <\n>> [email protected]>\n>> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>>\n>>\n>> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> > [Reposted to the proper list]\n>> >\n>> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n>> > at one point), gradually moving to v9.0 w/ replication in 2010. In\n>> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n>> > & was entirely satisfied with the result.\n>> >\n>> > In March of this year, AWS announced that v9.6 was nearing end of\n>> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n>> > 2022, if users did not perform the upgrade earlier. 
My first attempt\n>> > was successful as far as the upgrade itself, but complex queries that\n>> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>> >\n>> > I didn't have the time in March to diagnose the problem, other than\n>> > some futile adjustments to server parameters, so I reverted back to a\n>> > saved copy of my v9.6 data.\n>> >\n>> > On Sunday, being retired, I decided to attempt to solve the issue in\n>> > earnest. I have now spent five days (about 14 hours a day), trying\n>> > various things, including adding additional indexes. Keeping the v9.6\n>> > data online for web users, I've \"forked\" the data into new copies, &\n>> > updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n>> > the same problem: As you will see below, it appears that versions 10\n>> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n>> > tables. Note that the expected & actual run times both differ for\n>> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n>> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n>> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n>> > definitions. With one exception, table definitions are from the FCC\n>> > (Federal Communications Commission); the view definitions are my own.\n>> >\n>> >\n>> >\n>>\n>> Have you tried reproducing these results outside RDS, say on an EC2\n>> instance running vanilla PostgreSQL?\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>>\n>>\n>> --\n>> Andrew Dunstan\n>> EDB:\n>> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$\n>> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>\n>>\n>>\n>>",
"msg_date": "Fri, 28 May 2021 22:38:10 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 4:23 PM, Jan Wieck wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>\n> What sticks out for me are these two scans, which balloon from 50-60\n> heap fetches to 1.5M each.\n>\n>> -> Nested Loop (cost=0.29..0.68 rows=1\n>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n>> \"_Territory\".country_id)\n>> Rows Removed by Join Filter: 0\n>> -> Index Only Scan using\n>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>> Index Cond: (iso_alpha2 =\n>> \"_GovtRegion\".country_id)\n>> Heap Fetches: 1487153\n>> -> Index Only Scan using\n>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>> Index Cond: (territory_id =\n>> \"_GovtRegion\".territory_id)\n>> Heap Fetches: 1550706\n>\n> How did you load the database? pg_dump -> psql/pg_restore?\n>\n> If so, did you perform a VACUUM FREEZE after the load?\n>\n>\n>\n\nJan\n\n\nAIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\nassume you would know better than him or me what it actually does do :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 17:15:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021 at 05:15:33PM -0400, Andrew Dunstan wrote:\n> > How did you load the database? pg_dump -> psql/pg_restore?\n> >\n> > If so, did you perform a VACUUM FREEZE after the load?\n> \n> Jan\n> \n> \n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n\nI think it uses pg_upgrade.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 28 May 2021 17:30:52 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "I recently did 20 upgrades from 9.6 to 12.4 and 12.5. No issues and the upgrade process uses pg_upgrade. I don’t know if AWS modified it though. \n\nBob\n\nSent from my PDP11\n\n> On May 28, 2021, at 5:15 PM, Andrew Dunstan <[email protected]> wrote:\n> \n> \n>> On 5/28/21 4:23 PM, Jan Wieck wrote:\n>> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> \n>> What sticks out for me are these two scans, which balloon from 50-60\n>> heap fetches to 1.5M each.\n>> \n>>> -> Nested Loop (cost=0.29..0.68 rows=1\n>>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>>> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n>>> \"_Territory\".country_id)\n>>> Rows Removed by Join Filter: 0\n>>> -> Index Only Scan using\n>>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n>>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>>> Index Cond: (iso_alpha2 =\n>>> \"_GovtRegion\".country_id)\n>>> Heap Fetches: 1487153\n>>> -> Index Only Scan using\n>>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n>>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>>> Index Cond: (territory_id =\n>>> \"_GovtRegion\".territory_id)\n>>> Heap Fetches: 1550706\n>> \n>> How did you load the database? pg_dump -> psql/pg_restore?\n>> \n>> If so, did you perform a VACUUM FREEZE after the load?\n>> \n>> \n>> \n> \n> Jan\n> \n> \n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n> \n> \n\n\n\n",
"msg_date": "Fri, 28 May 2021 18:09:50 -0400",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 13:23, Jan Wieck wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>\n> What sticks out for me are these two scans, which balloon from 50-60 \n> heap fetches to 1.5M each.\n>\n>> -> Nested Loop (cost=0.29..0.68 rows=1 \n>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n>> \"_Territory\".country_id)\n>> Rows Removed by Join Filter: 0\n>> -> Index Only Scan using \n>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 \n>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>> Index Cond: (iso_alpha2 = \n>> \"_GovtRegion\".country_id)\n>> Heap Fetches: 1487153\n>> -> Index Only Scan using \n>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) \n>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>> Index Cond: (territory_id = \n>> \"_GovtRegion\".territory_id)\n>> Heap Fetches: 1550706\n>\n> How did you load the database? pg_dump -> psql/pg_restore?\n>\n> If so, did you perform a VACUUM FREEZE after the load?\n>\n> Regards, Jan\n\nIt was RDS's \"upgrade in place\". 
According to the PostgreSQL site, for \nv9.4 & v12: \"Aggressive freezing is always performed when the table is \nrewritten, so this option is redundant when FULL is specified.\"\n\nI did a VACUUM FULL.",
"msg_date": "Fri, 28 May 2021 15:13:58 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021, 17:15 Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 5/28/21 4:23 PM, Jan Wieck wrote:\n> > On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> >\n> > What sticks out for me are these two scans, which balloon from 50-60\n> > heap fetches to 1.5M each.\n> >\n> >> -> Nested Loop (cost=0.29..0.68 rows=1\n> >> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n> >> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n> >> \"_Territory\".country_id)\n> >> Rows Removed by Join Filter: 0\n> >> -> Index Only Scan using\n> >> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n> >> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> >> Index Cond: (iso_alpha2 =\n> >> \"_GovtRegion\".country_id)\n> >> Heap Fetches: 1487153\n> >> -> Index Only Scan using\n> >> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n> >> (actual time=0.001..0.001 rows=1 loops=1487153)\n> >> Index Cond: (territory_id =\n> >> \"_GovtRegion\".territory_id)\n> >> Heap Fetches: 1550706\n> >\n> > How did you load the database? pg_dump -> psql/pg_restore?\n> >\n> > If so, did you perform a VACUUM FREEZE after the load?\n> >\n> >\n> >\n>\n> Jan\n>\n>\n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n>\n\nSince I am not working at AWS I can't tell for sure. ;)\n\nIt used to perform a binary pgupgrade. But that also has issues with xids\nand freezing. 
So I would throw a cluster wide vac-freeze in there for good\nmeasure, Sir.\n\n\nBest Regards, Jan\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Fri, 28 May 2021 22:27:05 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 10:27 PM, Jan Wieck wrote:\n>\n>\n> On Fri, May 28, 2021, 17:15 Andrew Dunstan <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>\n>\n>\n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does\n> do :-)\n>\n>\n> Since I am not working at AWS I can't tell for sure. ;)\n\n\nApologies, my mistake then.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 22:41:08 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\n\n> On May 28, 2021, at 14:30, Bruce Momjian <[email protected]> wrote:\n> I think it uses pg_upgrade.\n\nIt does. It does not, however, do the vacuum analyze step afterwards. A VACUUM (FULL, ANALYZE) should take care of that, and I believe the OP said he had done that after the pg_upgrade.\n\nThe most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n\nThat being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n\nIt might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n\n",
"msg_date": "Fri, 28 May 2021 19:43:23 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 19:43, Christophe Pettus wrote:\n> ...\n> The most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n>\n> That being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n>\n> It might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n\nI spent quite a bit of time over the past five days experimenting with \nvarious parameter values, to no avail, but I don't mind trying some more.\n\nI have other queries that fail even more spectacularly, & they all seem \nto involve a generated table like the \"club\" one in my example. I have \nan idea that I might try, in effectively changing the order of \nevaluation. I'll have to think about that. Thanks for the suggestion! \nHowever, one \"shouldn't\" have to tinker with the order of stuff in SQL; \nthat's one of the beauties of the language: the \"compiler\" (planner) is \nsupposed to figure that all out. And for me, that's been true for the \npast 15 years with PostgreSQL.\n\nNote that this problem is not unique to v13. It happened with upgrades \nto v10, 11, &12. So, some fundamental change was made back then (at \nleast in the RDS version). Since I need a bulletproof backup past next \nJanuary, I think my next task will be to get an EC2 instance running \nv9.6, where AWS can't try to upgrade it. 
Then, at my leisure, I can \nfiddle with upgrading.\n",
"msg_date": "Fri, 28 May 2021 21:08:28 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 05/29/21 07:08, Dean Gibson (DB Administrator) wrote:\n> On 2021-05-28 19:43, Christophe Pettus wrote:\n>> ...\n>> The most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n>>\n>> That being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n>>\n>> It might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n>\n> I spent quite a bit of time over the past five days experimenting with \n> various parameter values, to no avail, but I don't mind trying some more.\n>\n> I have other queries that fail even more spectacularly, & they all \n> seem to involve a generated table like the \"club\" one in my example. \n> I have an idea that I might try, in effectively changing the order of \n> evaluation. I'll have to think about that. Thanks for the \n> suggestion! However, one \"shouldn't\" have to tinker with the order of \n> stuff in SQL; that's one of the beauties of the language: the \n> \"compiler\" (planner) is supposed to figure that all out. And for me, \n> that's been true for the past 15 years with PostgreSQL.\n>\n> Note that this problem is not unique to v13. It happened with \n> upgrades to v10, 11, &12. So, some fundamental change was made back \n> then (at least in the RDS version). Since I need a bulletproof backup \n> past next January, I think my next task will be to get an EC2 instance \n> running v9.6, where AWS can't try to upgrade it. 
Then, at my leisure, \n> I can fiddle with upgrading.\n\nBTW what is the planner reason to not use index in v13.2? Is index in \ncorrupted state? Have you try to reindex index \n\"FccLookup\".\"_LicStatus_pkey\" ?\n\n1.5M of seqscan's are looking really bad.\n\n        SubPlan 2\n          ->  Limit  (cost=0.15..8.17 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=55)\n                ->  *Index Scan using \"_LicStatus_pkey\" on \n\"_LicStatus\"*  (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=55)\n                      Index Cond: (\"_HD\".license_status = \nstatus_id)\n\n\nSubPlan 2\n          ->  Limit  (cost=0.00..1.07 rows=1 width=13) \n(actual time=0.001..0.001 rows=1 loops=1487153)\n                ->  *Seq Scan on \"_LicStatus\"*  \n(cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \nloops=1487153)\n                      Filter: (\"_HD\".license_status = \nstatus_id)\n                      Rows Removed by Filter: 1\n",
"msg_date": "Sat, 29 May 2021 08:24:37 +0300",
"msg_from": "Alexey M Boltenkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021, 22:41 Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 5/28/21 10:27 PM, Jan Wieck wrote:\n> >\n> >\n> > On Fri, May 28, 2021, 17:15 Andrew Dunstan <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> >\n> >\n> >\n> >     AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> >     assume you would know better than him or me what it actually does\n> >     do :-)\n> >\n> >\n> > Since I am not working at AWS I can't tell for sure. ;)\n>\n>\n> Apologies, my mistake then.\n>\n\nNo need to apologize, you were correct two months ago.\n\n\nBest Regards, Jan\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n",
"msg_date": "Sat, 29 May 2021 07:39:29 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 22:24, Alexey M Boltenkov wrote:\n> On 05/29/21 07:08, Dean Gibson (DB Administrator) wrote: [deleted]\n>\n> BTW what is the planner reason to not use index in v13.2? Is index in \n> corrupted state? Have you try to reindex index \n> \"FccLookup\".\"_LicStatus_pkey\" ?\n>\n> 1.5M of seqscan's are looking really bad.\n>\n>         SubPlan 2\n>           ->  Limit  (cost=0.15..8.17 rows=1 width=32) \n> (actual time=0.006..0.007 rows=1 loops=55)\n>                 ->  *Index Scan using \"_LicStatus_pkey\" on \n> \"_LicStatus\"*  (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 rows=1 loops=55)\n>                       Index Cond: (\"_HD\".license_status = \n> status_id)\n>\n>\n> SubPlan 2\n>           ->  Limit  (cost=0.00..1.07 rows=1 width=13) \n> (actual time=0.001..0.001 rows=1 loops=1487153)\n>                 ->  *Seq Scan on \"_LicStatus\"*  \n> (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \n> loops=1487153)\n>                       Filter: (\"_HD\".license_status = \n> status_id)\n>                       Rows Removed by Filter: 1\n>\n\nDoing your REINDEX didn't help.  Now in the process of reindexing the \nentire database.  When that's done, I'll let you know if there is any \nimprovement.\n",
"msg_date": "Sat, 29 May 2021 13:17:40 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "*SOLVED !!!* Below is the *new* EXPLAIN ANALYZE for *13.2* on AWS RDS \n(with *no changes* to server parameters) along with the prior EXPLAIN \nANALYZE outputs for easy comparison.\n\nWhile I didn't discount the significance & effect of optimizing the \nserver parameters, this problem always seemed to me like a fundamental \ndifference in how the PostgreSQL planner viewed the structure of the \nquery. In particular, I had a usage pattern of writing VIEWS that \nworked very well with v9.6 & prior versions, but which made me suspect a \nroute of attack:\n\nSince the FCC tables contain lots of one-character codes for different \nconditions, to simplify maintenance & displays to humans, I created over \ntwenty tiny lookup tables (a dozen or so entries in each table), to \nrender a human-readable field as a replacement for the original \none-character field in many of the VIEWs. In some cases those \n\"humanized\" fields were used as conditions in SELECT statements. Of \ncourse, fields that are not referenced or selected for output from a \nparticular query, never get looked up (an advantage over using a JOIN \nfor each lookup). In some cases, for ease of handling multiple or \ncomplex lookups, I indeed used a JOIN. All this worked fine until v10.\n\nHere's the FROM clause that bit me:\n\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\nThe first two JOINs are not the problem, & are in fact retained in my \nsolution. The problem is the third JOIN, where \"fips_county\" from \n\"County\" is actually matched with the corresponding field from the \n\"zip_code\" VIEW. Works fine, if you don't mind the performance impact \nin v10 & above. It has now been rewritten, to be a sub-query for an \noutput field. Voila ! Back to sub-second query times.\n\nThis also solved performance issues with other queries as well. 
I also \nnow use lookup values as additional fields in the output, in addition to \nthe original fields, which should help some more (but means some changes \nto some web pages that do queries).\n\n-- Dean\n\nps: I wonder how many other RDS users of v9.6 are going to get a very \nrude awakening *very soon*, as AWS is not allowing new instances of v9.6 \nafter *August 2* (see https://forums.aws.amazon.com/ann.jspa?annID=8499 \n). Whether that milestone affects restores from snapshots, remains to \nbe seen (by others, not by me). In other words, users should plan to be \nup & running on a newer version well before August. Total cost to me? \nI\"m in my *8th day* of dealing with this, & I still have a number of web \npages to update, due to changes in SQL field names to manage this mess. \nThis was certainly not a obvious solution.\n\n*Here's from 13.2 (new):*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=457.77..457.77 rows=1 width=64) (actual \ntime=48.737..48.742 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop Left Join (cost=1.57..457.76 rows=1 width=64) \n(actual time=1.796..48.635 rows=43 loops=1)\n -> Nested Loop (cost=1.28..457.07 rows=1 width=71) (actual \ntime=1.736..48.239 rows=43 loops=1)\n Join Filter: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n Rows Removed by Join Filter: 1297\n -> Nested Loop (cost=1.28..453.75 rows=1 width=70) \n(actual time=1.720..47.778 rows=43 loops=1)\n Join Filter: 
((\"_HD\".unique_system_identifier = \n\"_EN\".unique_system_identifier) AND (\"_HD\".callsign = \"_EN\".callsign))\n -> Nested Loop (cost=0.85..450.98 rows=1 \nwidth=65) (actual time=1.207..34.912 rows=43 loops=1)\n -> Nested Loop (cost=0.43..376.57 rows=27 \nwidth=50) (actual time=0.620..20.956 rows=43 loops=1)\n -> Seq Scan on \"_Club\" \n(cost=0.00..4.44 rows=44 width=35) (actual time=0.037..0.067 rows=44 \nloops=1)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 151\n -> Index Scan using \"_HD_callsign\" on \n\"_HD\" (cost=0.43..8.45 rows=1 width=15) (actual time=0.474..0.474 \nrows=1 loops=44)\n Index Cond: (callsign = \n\"_Club\".trustee_callsign)\n Filter: (license_status = \n'A'::bpchar)\n Rows Removed by Filter: 0\n -> Index Scan using \"_AM_pkey\" on \"_AM\" \n(cost=0.43..2.75 rows=1 width=15) (actual time=0.323..0.323 rows=1 loops=43)\n Index Cond: (unique_system_identifier \n= \"_HD\".unique_system_identifier)\n Filter: (\"_HD\".callsign = callsign)\n -> Index Scan using \"_EN_pkey\" on \"_EN\" \n(cost=0.43..2.75 rows=1 width=60) (actual time=0.298..0.298 rows=1 loops=43)\n Index Cond: (unique_system_identifier = \n\"_AM\".unique_system_identifier)\n Filter: (\"_AM\".callsign = callsign)\n -> Seq Scan on \"_GovtRegion\" (cost=0.00..1.93 rows=93 \nwidth=7) (actual time=0.002..0.004 rows=31 loops=43)\n -> Nested Loop (cost=0.29..0.68 rows=1 width=7) (actual \ntime=0.008..0.008 rows=1 loops=43)\n -> Index Only Scan using \"_IsoCountry_iso_alpha2_key\" \non \"_IsoCountry\" (cost=0.14..0.38 rows=1 width=3) (actual \ntime=0.004..0.004 rows=1 loops=43)\n Index Cond: (iso_alpha2 = \"_GovtRegion\".country_id)\n Heap Fetches: 43\n -> Index Only Scan using \"_Territory_pkey\" on \n\"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual time=0.003..0.003 \nrows=1 loops=43)\n Index Cond: ((country_id = \n\"_IsoCountry\".iso_alpha2) AND (territory_id = \"_GovtRegion\".territory_id))\n Heap Fetches: 43\n Planning Time: 4.017 ms\n Execution Time: 48.822 
ms\n\n\nOn 2021-05-28 11:48, Dean Gibson (DB Administrator) wrote:\n> ...\n>\n> *Here's from v9.6:*\n>\n> => EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \n> callsign AS trustee_callsign, applicant_type, entity_name, licensee_id \n> AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY \n> extra_count DESC, club_count DESC, entity_name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=407.13..407.13 rows=1 width=94) (actual \n> time=348.850..348.859 rows=43 loops=1)\n> Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n> \"_EN\".entity_name\n> Sort Method: quicksort Memory: 31kB\n> -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual \n> time=7.587..348.732 rows=43 loops=1)\n> -> Nested Loop (cost=4.47..394.66 rows=1 width=94) (actual \n> time=5.740..248.149 rows=43 loops=1)\n> -> Nested Loop Left Join (cost=4.04..382.20 rows=1 \n> width=79) (actual time=2.458..107.908 rows=55 loops=1)\n> -> Hash Join (cost=3.75..380.26 rows=1 \n> width=86) (actual time=2.398..106.990 rows=55 loops=1)\n> Hash Cond: ((\"_EN\".country_id = \n> \"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n> -> Nested Loop (cost=0.43..376.46 rows=47 \n> width=94) (actual time=2.294..106.736 rows=55 loops=1)\n> -> Seq Scan on \"_Club\" \n> (cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 \n> loops=1)\n> Filter: (club_count >= 5)\n> Rows Removed by Filter: 151\n> -> Index Scan using \"_EN_callsign\" \n> on \"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 \n> rows=1 loops=44)\n> Index Cond: (callsign = \n> \"_Club\".trustee_callsign)\n> -> Hash (cost=1.93..1.93 rows=93 width=7) \n> (actual time=0.071..0.071 rows=88 loops=1)\n> Buckets: 1024 Batches: 1 Memory \n> Usage: 12kB\n> -> Seq Scan on \"_GovtRegion\" \n> 
(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 \n> loops=1)\n> -> Nested Loop (cost=0.29..1.93 rows=1 width=7) \n> (actual time=0.012..0.014 rows=1 loops=55)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62 rows=1 \n> width=3) (actual time=0.006..0.006 rows=1 loops=55)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 55\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n> (actual time=0.004..0.005 rows=1 loops=55)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 59\n> -> Index Scan using \"_HD_pkey\" on \"_HD\" \n> (cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 \n> loops=55)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: ((\"_EN\".callsign = callsign) AND \n> (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n> '???'::character varying))::text))::character(1) = 'A'::bpchar))\n> Rows Removed by Filter: 0\n> SubPlan 2\n> -> Limit (cost=0.15..8.17 rows=1 width=32) \n> (actual time=0.006..0.007 rows=1 loops=55)\n> -> Index Scan using \"_LicStatus_pkey\" on \n> \"_LicStatus\" (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 rows=1 loops=55)\n> Index Cond: (\"_HD\".license_status = \n> status_id)\n> -> Index Scan using \"_AM_pkey\" on \"_AM\" (cost=0.43..4.27 \n> rows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: (\"_EN\".callsign = callsign)\n> SubPlan 1\n> -> Limit (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.007..0.007 rows=1 loops=43)\n> -> Index Scan using \"_ApplicantType_pkey\" on \n> \"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 
rows=1 loops=43)\n> Index Cond: (\"_EN\".applicant_type_code = \n> app_type_id)\n> Planning time: 13.490 ms\n> Execution time: 349.182 ms\n> (43 rows)\n>\n>\n> *Here's from v13.2:*\n>\n> => EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \n> callsign AS trustee_callsign, applicant_type, entity_name, licensee_id \n> AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY \n> extra_count DESC, club_count DESC, entity_name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=144365.60..144365.60 rows=1 width=94) (actual \n> time=31898.860..31901.922 rows=43 loops=1)\n> Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n> \"_EN\".entity_name\n> Sort Method: quicksort Memory: 31kB\n> -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94) (actual \n> time=6132.403..31894.233 rows=43 loops=1)\n> -> Nested Loop (cost=58055.51..144364.21 rows=1 width=62) \n> (actual time=1226.085..30337.921 rows=837792 loops=1)\n> -> Nested Loop Left Join (cost=58055.09..144360.38 \n> rows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)\n> -> Hash Join (cost=58054.80..144359.69 rows=1 \n> width=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)\n> Hash Cond: ((\"_EN\".unique_system_identifier \n> = \"_AM\".unique_system_identifier) AND (\"_EN\".callsign = \"_AM\".callsign))\n> -> Hash Join (cost=3.33..53349.72 \n> rows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)\n> Hash Cond: ((\"_EN\".country_id = \n> \"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n> -> Seq Scan on \"_EN\" \n> (cost=0.00..45288.05 rows=1509005 width=60) (actual \n> time=0.037..2737.054 rows=1508736 loops=1)\n> -> Hash (cost=1.93..1.93 rows=93 \n> width=7) (actual time=0.706..1.264 rows=88 loops=1)\n> Buckets: 1024 Batches: 1 \n> 
Memory Usage: 12kB\n> -> Seq Scan on \"_GovtRegion\" \n> (cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 \n> loops=1)\n> -> Hash (cost=28093.99..28093.99 \n> rows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 \n> loops=1)\n> Buckets: 131072 Batches: 32 Memory \n> Usage: 3175kB\n> -> Seq Scan on \"_AM\" \n> (cost=0.00..28093.99 rows=1506699 width=15) (actual \n> time=0.009..742.774 rows=1506474 loops=1)\n> -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n> (actual time=0.003..0.004 rows=1 loops=1487153)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \n> width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 1487153\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \n> time=0.001..0.001 rows=1 loops=1487153)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 1550706\n> -> Index Scan using \"_HD_pkey\" on \"_HD\" \n> (cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 \n> loops=1487153)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: ((\"_EN\".callsign = callsign) AND \n> (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n> '???'::character varying))::text))::character(1) = 'A'::bpchar))\n> Rows Removed by Filter: 0\n> SubPlan 2\n> -> Limit (cost=0.00..1.07 rows=1 width=13) \n> (actual time=0.001..0.001 rows=1 loops=1487153)\n> -> Seq Scan on \"_LicStatus\" \n> (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \n> loops=1487153)\n> Filter: (\"_HD\".license_status = \n> status_id)\n> Rows Removed by Filter: 1\n> -> Index Scan using \"_Club_pkey\" on \"_Club\" (cost=0.14..0.17 \n> rows=1 width=35) 
(actual time=0.002..0.002 rows=0 loops=837792)\n> Index Cond: (trustee_callsign = \"_EN\".callsign)\n> Filter: (club_count >= 5)\n> Rows Removed by Filter: 0\n> SubPlan 1\n> -> Limit (cost=0.00..1.20 rows=1 width=15) (actual \n> time=0.060..0.060 rows=1 loops=43)\n> -> Seq Scan on \"_ApplicantType\" (cost=0.00..1.20 \n> rows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)\n> Filter: (\"_EN\".applicant_type_code = app_type_id)\n> Rows Removed by Filter: 7\n> Planning Time: 173.753 ms\n> Execution Time: 31919.601 ms\n> (46 rows)\n>\n",
"msg_date": "Sun, 30 May 2021 20:07:29 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\n\n> On May 30, 2021, at 20:07, Dean Gibson (DB Administrator) <[email protected]> wrote:\n> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n\nIf, rather than a subquery, you explicitly called out the join criteria with ON, did it have the same performance benefit?\n\n\n\n",
"msg_date": "Sun, 30 May 2021 20:41:28 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-05-30 20:41, Christophe Pettus wrote:\n> On May 30, 2021, at 20:07, Dean Gibson (DB Administrator) \n> <[email protected]> wrote:\n>> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n> If, rather than a subquery, you explicitly called out the join criteria with ON, did it have the same performance benefit?\n\nI thought that having a \"USING\" clause, was semantically equivalent to \nan \"ON\" clause with the equalities explicitly stated. So no, I didn't \ntry that.\n\nThe matching that occurred is *exactly* what I wanted. I just didn't \nwant the performance impact.\n",
"msg_date": "Sun, 30 May 2021 21:23:43 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n> I thought that having a \"USING\" clause, was semantically equivalent to \n> an \"ON\" clause with the equalities explicitly stated. So no, I didn't \n> try that.\n\nUSING is not that, or at least not only that ... read the manual.\n\nI'm wondering if what you saw is some side-effect of the aliasing\nthat USING does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 00:44:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-05-30 21:44, Tom Lane wrote:\n> \"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n>> I thought that having a \"USING\" clause, was semantically equivalent to\n>> an \"ON\" clause with the equalities explicitly stated. So no, I didn't\n>> try that.\n> USING is not that, or at least not only that ... read the manual.\n>\n> I'm wondering if what you saw is some side-effect of the aliasing\n> that USING does.\n>\n> \t\t\tregards, tom lane\n\n    USING ( join_column [, ...] )\n\n    A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND left_table.b = right_table.b .... Also, USING implies that only one of each pair of equivalent columns will be included in the join output, not both.\n\n    The USING clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison for each one. For example, joining T1 and T2 with USING (a, b) produces the join condition ON T1.a = T2.a AND T1.b = T2.b.\n\n    Furthermore, the output of JOIN USING suppresses redundant columns: there is no need to print both of the matched columns, since they must have equal values. While JOIN ON produces all columns from T1 followed by all columns from T2, JOIN USING produces one output column for each of the listed column pairs (in the listed order), followed by any remaining columns from T1, followed by any remaining columns from T2.\n\n    Finally, NATURAL is a shorthand form of USING: it forms a USING list consisting of all column names that appear in both input tables. As with USING, these columns appear only once in the output table. If there are no common column names, NATURAL JOIN behaves like JOIN ... ON TRUE, producing a cross-product join.\n\nI get that it's like NATURAL, in that only one column is included. Is \nthere some other side-effect? Is the fact that I was using a LEFT JOIN, \nrelevant? Is what I was doing, unusual (or risky)?\n",
"msg_date": "Sun, 30 May 2021 22:24:00 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "> Here's the FROM clause that bit me:\n>\n> FROM lic_en\n> JOIN govt_region USING (territory_id, country_id)\n> LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n> LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\nI'm guessing that there's a dependency/correlation between\nterritory/country/county, and that's probably related to a misestimate causing\na bad plan.\n\n> The first two JOINs are not the problem, & are in fact retained in my\n> solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is\n> actually matched with the corresponding field from the \"zip_code\" VIEW. Works\n> fine, if you don't mind the performance impact in v10 & above. It has now\n> been rewritten, to be a sub-query for an output field. Voila ! Back to\n> sub-second query times.\n\nWhat version of 9.6.X were you upgrading *from* ?\n\nv9.6 added selectivity estimates based on FKs, so it's not surprising if there\nwas a plan change migrating *to* v9.6.\n\n...but there were a number of fixes to that, and it seems possible the plans\nchanged between 9.6.0 and 9.6.22, and anything backpatched to 9.X would also be\nin v10+. So you might've gotten the bad plan on 9.6.22, also.\n\nI found these commits that might be relevant.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1f184426b\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7fa93eec4\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=770671062\n\nad1c36b07 wasn't backpatched and probably not relevant to your issue.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 31 May 2021 23:16:35 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-05-31 21:16, Justin Pryzby wrote:\n>> Here's the FROM clause that bit me:\n>>\n>> FROM lic_en\n>> JOIN govt_region USING (territory_id, country_id)\n>> LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n>> LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n> I'm guessing that there's a dependency/correlation between territory/country/county, and that's probably related to a misestimate causing a bad plan.\n>\n>> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n> What version of 9.6.X were you upgrading *from* ?\n>\n> v9.6 added selectivity estimates based on FKs, so it's not surprising if there was a plan change migrating *to* v9.6.\n\nI originally upgraded from 9.6.20 to v12.6. When that (otherwise \nsuccessful) upgrade had performance problems, I upgraded the v9.6.20 \ncopy to v9.6.21, & tried again, with the same result.\n\nInterestingly, on v13.2 I have now run into another (similar) \nperformance issue. I've solved it by setting the following to values I \nused with v9.x:\n\njoin_collapse_limit & from_collapse_limit = 16\n\ngeqo_threshold = 32\n\nI'm pretty sure I tried those settings (on v10 & above) with the earlier \nperformance problem, to no avail. However, I now wonder what would have \nbeen the result if I had doubled those values before re-architecting \nsome of my tables (moving from certain JOINs to specific sub-selects).\n",
"msg_date": "Tue, 1 Jun 2021 10:44:54 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon \nRDS, I wondered, why am I paying AWS for an RDS-based version, when I \nwas forced by their POLICY to go through the effort I did? I'm not one \nof the crowd who thinks, \"It works OK, so I don't update anything\". I'm \nusually one who is VERY quick to apply upgrades, especially when there \nis a fallback ability. However, the initial failure to successfully \nupgrade from v9.6 to any more recent major version, put me in a \ntime-limited box that I really don't like to be in.\n\nIf I'm going to have to deal with maintenance issues, like I easily did \nwhen I ran native PostgreSQL, why not go back to that? So, I've ported \nmy database back to native PostgreSQL v13.3 on an AWS EC2 instance. It \nlooks like I will save about 40% of the cost, which is in accord with \nthis article: https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n\nWhy am I mentioning this here? Because there were minor issues & \nbenefits in porting back to native PostgreSQL, that may be of interest here:\n\nFirst, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a \nsuperuser, & it tries to dump protected stuff. If there is a way around \nthat, I'd like to know it, even though it's not an issue now. pg_dump \nworks OK, but of course you don't get the roles dumped. Fortunately, I \nkept script files that have all the database setup, so I just ran them \nto create all the relationships, & then used the pg_dump output. Worked \nflawlessly.\n\nSecond, I noticed that the compressed (\"-Z6\" level) output from pg_dump \nis less than one-tenth of the disk size of the restored database. \nThat's a LOT less than the size of the backups that AWS was charging me for.\n\nThird, once you increase your disk size in RDS, you can never decrease \nit, unless you go through the above port to a brand new instance (RDS or \nnative PostgreSQL). RDS backups must be restored to the same size \nvolume (or larger) that they were created for. A VACUUM FULL ANALYZE on \nRDS requires more than doubling the required disk size (I tried with \nless several times). This is easily dealt with on an EC2 Linux \ninstance, requiring only a couple minutes of DB downtime.\n\nFourth, while AWS is forcing customers to upgrade from v9.6, the \nonly PostgreSQL client tools that AWS currently provides in their \nstandard repository are for v9.6!!! That means when you want to use any \nof their client tools on newer versions, you have problems. psql gives \nyou a warning on each startup, & pg_dump simply (& correctly) won't back \nup a newer DB. If you add their \"optional\" repository, you can use \nv12.6 tools, but v13.3 is only available by hand-editing the repo file \nto include v13 (which I did). For this level of support, I pay extra? \nI don't think so.\n\nFinally, the AWS support forums are effectively \"write-only.\" Most of \nthe questions asked there, never get ANY response from other users, & \nAWS only uses them to post announcements, from what I can tell. I got a \nLOT more help here in this thread, & last I looked, I don't pay anyone here.\n",
"msg_date": "Wed, 9 Jun 2021 18:50:38 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in\n> Amazon RDS, I wondered, why I am paying AWS for an RDS-based version,\n> when I was forced by their POLICY to go through the effort I did? I'm\n> not one of the crowd who thinks, \"It works OK, so I don't update\n> anything\". I'm usually one who is VERY quick to apply upgrades,\n> especially when there is a fallback ability. However, the initial\n> failure to successfully upgrade from v9.6 to any more recent major\n> version, put me in a time-limited box that I really don't like to be in.\n>\n> If I'm going to have to deal with maintenance issues, like I easily\n> did when I ran native PostgreSQL, why not go back to that? So, I've\n> ported my database back to native PostgreSQL v13.3 on an AWS EC2\n> instance. It looks like I will save about 40% of the cost, which is\n> in accord with this article: \n> https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n>\n> Why am I mentioning this here? Because there were minor issues &\n> benefits in porting back to native PostgreSQL, that may be of interest\n> here:\n>\n> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a\n> superuser, & it tries to dump protected stuff. If there is a way\n> around that, I'd like to know it, even though it's not an issue now. \n> pg_dump works OK, but of course you don't get the roles dumped. \n> Fortunately, I kept script files that have all the database setup, so\n> I just ran them to create all the relationships, & then used the\n> pg_dump output. Worked flawlessly.\n\n\n\nThis was added in release 12 specifically with RDS in mind:\n\n\n pg_dumpall --exclude-database\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 10 Jun 2021 06:29:12 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 6:50 PM Dean Gibson (DB Administrator) <\[email protected]> wrote:\n\n> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon\n> RDS, I wondered, why I am paying AWS for an RDS-based version, when I was\n> forced by their POLICY to go through the effort I did? I'm not one of the\n> crowd who thinks, \"It works OK, so I don't update anything\". I'm usually\n> one who is VERY quick to apply upgrades, especially when there is a\n> fallback ability. However, the initial failure to successfully upgrade\n> from v9.6 to any more recent major version, put me in a time-limited box\n> that I really don't like to be in.\n>\n\nRight, and had you deployed on EC2 you would not have been forced to\nupgrade. This is an argument against RDS for this particular problem.\n\n\n>\n> If I'm going to have to deal with maintenance issues, like I easily did\n> when I ran native PostgreSQL, why not go back to that? So, I've ported my\n> database back to native PostgreSQL v13.3 on an AWS EC2 instance. It looks\n> like I will save about 40% of the cost, which is in accord with this\n> article: https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n>\n\nThat is correct, it is quite a bit less expensive to host your own EC2\ninstances. Where it is not cheaper is when you need to easily configure\nbackups, take a snapshot, or bring up a replica. 
For those in the know,\nputting in some work upfront largely removes the burden that RDS corrects\nbut a lot of people who deploy RDS are *not* DBAs, or even Systems people.\nThey are front end developers.\n\nGlad to see you were able to work things out.\n\nJD\n\n-- \n\n - Partner, Father, Explorer and Founder.\n - Founder - https://commandprompt.com/ - 24x7x365 Postgres since 1997\n - Founder and Co-Chair - https://postgresconf.org/\n - Founder - https://postgresql.us - United States PostgreSQL\n - Public speaker, published author, postgresql expert, and people\n believer.\n - Host - More than a refresh\n <https://commandprompt.com/about/more-than-a-refresh/>: A podcast about\n data and the people who wrangle it.",
"msg_date": "Thu, 10 Jun 2021 07:36:26 -0700",
"msg_from": "Joshua Drake <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 03:29, Andrew Dunstan wrote:\n> On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n>> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. Worked flawlessly.\n> This was added in release 12 specifically with RDS in mind:\n>\n> pg_dumpall --exclude-database\n>\n> cheers, andrew\n\nI guess I don't understand what that option does:\n\n=>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\npg_dump: error: could not write to output file: No space left on device\npg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n\nI expected a tiny file, not 3.5GB. \"MailPen\" is the only database \n(other than what's pre-installed). Do I need quotes on the command line?",
"msg_date": "Thu, 10 Jun 2021 09:07:52 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator) <\[email protected]> wrote:\n\n> On 2021-06-10 03:29, Andrew Dunstan wrote:\n>\n> On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n>\n> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. Worked flawlessly.\n>\n> This was added in release 12 specifically with RDS in mind:\n>\n> pg_dumpall --exclude-database\n>\n> cheers, andrew\n>\n>\n> I guess I don't understand what that option does:\n>\n> =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n> pg_dump: error: could not write to output file: No space left on device\n> pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n>\n> I expected a tiny file, not 3.5GB. \"MailPen\" is the only database (other\n> than what's pre-installed). Do I need quotes on the command line?\n>\nSee at:\nhttps://www.postgresql.org/docs/13/app-pg-dumpall.html\n\nYour cmd lacks =\n=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 10 Jun 2021 13:54:55 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 09:54, Ranier Vilela wrote:\n> On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator) \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n>\n> I guess I don't understand what that option does:\n>\n> =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n> pg_dump: error: could not write to output file: No space left on\n> device\n> pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n>\n> I expected a tiny file, not 3.5GB. \"MailPen\" is the only database\n> (other than what's pre-installed). Do I need quotes on the\n> command line?\n>\n> See at:\n> https://www.postgresql.org/docs/13/app-pg-dumpall.html \n> <https://www.postgresql.org/docs/13/app-pg-dumpall.html>\n>\n> Your cmd lacks =\n> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>\n> regards, Ranier Vilela\n\nI read that before posting, but missed that. Old command line patterns \ndie hard!\n\nHowever, the result was the same: 3.5GB before running out of space.",
"msg_date": "Thu, 10 Jun 2021 10:43:13 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n> On 2021-06-10 09:54, Ranier Vilela wrote:\n>> Your cmd lacks =\n>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n\n> I read that before posting, but missed that. Old command line patterns \n> die hard!\n> However, the result was the same: 3.5GB before running out of space.\n\n[ experiments... ] Looks like you gotta do it like this:\n\n\tpg_dumpall '--exclude-database=\"MailPen\"' ...\n\nThis surprises me, as I thought it was project policy not to\ncase-fold command-line arguments (precisely because you end\nup needing weird quoting to prevent that).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Jun 2021 14:00:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/10/21 2:00 PM, Tom Lane wrote:\n> \"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>> Your cmd lacks =\n>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>> I read that before posting, but missed that. Old command line patterns \n>> die hard!\n>> However, the result was the same: 3.5GB before running out of space.\n> [ experiments... ] Looks like you gotta do it like this:\n>\n> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>\n> This surprises me, as I thought it was project policy not to\n> case-fold command-line arguments (precisely because you end\n> up needing weird quoting to prevent that).\n>\n> \t\t\t\n\n\n\nOuch. That looks like a plain old bug. Let's fix it. IIRC I just used\nthe same logic that we use for pg_dump's --exclude-* options, so we need\nto check if they have similar issues.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 10 Jun 2021 14:23:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 11:23, Andrew Dunstan wrote:\n> On 6/10/21 2:00 PM, Tom Lane wrote:\n>> \"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n>>> ... Do I need quotes on the command line?\n>>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>>> Your cmd lacks =\n>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>> I read [the manual] before posting, but missed that. Old command line patterns die hard!\n>>> However, the result was the same: 3.5GB before running out of space.\n>> [ experiments... ] Looks like you gotta do it like this:\n>>\n>> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>>\n>> This surprises me, as I thought it was project policy not to case-fold command-line arguments (precisely because you end up needing weird quoting to prevent that).\t\n> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used the same logic that we use for pg_dump's --exclude-* options, so we need to check if they have similar issues.\n>\n> cheers, andrew\n\nThat works! I thought it was a quoting/case issue! I was next going to \ntry single quotes just outside double quotes, & that works as well (& is \na bit more natural):\n\npg_dumpall -U Admin --exclude-database='\"MailPen\"' >zzz.sql\n\nUsing mixed case has bitten me before, but I am not deterred! I run \nphpBB 3.0.14 (very old version) because upgrades to more current \nversions fail on the mixed case of the DB name, as well as the use of \nSCHEMAs to isolate the message board from the rest of the data. Yes, I \nreported it years ago.\n\nI use lower-case for column, VIEW, & function names; mixed (camel) case \nfor table, schema, & database names; & upper-case for SQL keywords. It \nhelps readability (as does murdering a couple semicolons in the prior \nsentence).",
"msg_date": "Thu, 10 Jun 2021 12:29:05 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 10:43, Dean Gibson (DB Administrator) wrote:\n> On 2021-06-10 09:54, Ranier Vilela wrote:\n>> On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator) \n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>>\n>> ... Do I need quotes on the command line?\n>>\n>> See at:\n>> https://www.postgresql.org/docs/13/app-pg-dumpall.html \n>> <https://www.postgresql.org/docs/13/app-pg-dumpall.html>\n>>\n>> Your cmd lacks =\n>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>\n>> regards, Ranier Vilela\n>\n> ...\n>\n> However, the result was the same: 3.5GB before running out of space.\n>\n\nIt turns out the \"=\" is not needed. The double-quoting is (this works):\n\npg_dumpall -U Admin --exclude-database '\"MailPen\"' >zzz.sql",
"msg_date": "Thu, 10 Jun 2021 14:46:27 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/10/21 2:23 PM, Andrew Dunstan wrote:\n> On 6/10/21 2:00 PM, Tom Lane wrote:\n>> \"Dean Gibson (DB Administrator)\" <[email protected]> writes:\n>>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>>> Your cmd lacks =\n>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>> I read that before posting, but missed that. Old command line patterns \n>>> die hard!\n>>> However, the result was the same: 3.5GB before running out of space.\n>> [ experiments... ] Looks like you gotta do it like this:\n>>\n>> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>>\n>> This surprises me, as I thought it was project policy not to\n>> case-fold command-line arguments (precisely because you end\n>> up needing weird quoting to prevent that).\n>>\n>> \t\t\t\n>\n>\n> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n> the same logic that we use for pg_dump's --exclude-* options, so we need\n> to check if they have similar issues.\n>\n>\n\n\n\nPeter Eisentraut has pointed out to me that this is documented, albeit a\nbit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\n\nNevertheless, it's a bit of a POLA violation as we've seen above, and\nI'd like to get it fixed, if there's agreement, both for this pg_dumpall\noption and for pg_dump's pattern matching options.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:21:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dumpall --exclude-database case folding, was Re: AWS forcing PG\n upgrade from v9.6 a disaster"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 6/10/21 2:23 PM, Andrew Dunstan wrote:\n>> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n>> the same logic that we use for pg_dump's --exclude-* options, so we need\n>> to check if they have similar issues.\n\n> Peter Eisentraut has pointed out to me that this is documented, albeit a\n> bit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\nHmm.\n\n> Nevertheless, it's a bit of a POLA violation as we've seen above, and\n> I'd like to get it fixed, if there's agreement, both for this pg_dumpall\n> option and for pg_dump's pattern matching options.\n\n+1, but the -performance list isn't really where to hold that discussion.\nPlease start a thread on -hackers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:32:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall --exclude-database case folding,\n was Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\n[discussion transferred from pgsql-performance]\n\nSummary: pg_dumpall and pg_dump fold non-quoted command-line patterns to\nlower case\n\nTom Lane writes:\n\nAndrew Dunstan <[email protected]> writes:\n> On 6/10/21 2:23 PM, Andrew Dunstan wrote:\n>> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n>> the same logic that we use for pg_dump's --exclude-* options, so we need\n>> to check if they have similar issues.\n\n> Peter Eisentraut has pointed out to me that this is documented, albeit a\n> bit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\nHmm.\n\n> Nevertheless, it's a bit of a POLA violation as we've seen above, and\n> I'd like to get it fixed, if there's agreement, both for this pg_dumpall\n> option and for pg_dump's pattern matching options.\n\n+1, but the -performance list isn't really where to hold that discussion.\nPlease start a thread on -hackers.\n\nregards, tom lane\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:46:06 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dumpall --exclude-database case folding"
}
] |
[
{
"msg_contents": "below query is slow even with no data\n\n\nexplain ANALYZE\n\nWITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n SELECT ro.order_id,\n ro.date_time,\n round(ro.order_amount, 2) AS order_amount,\n b.branch_id,\n b.branch_name,\n st_x(b.location) AS from_x,\n st_y(b.location) AS from_y,\n b.user_id AS branch_user_id,\n b.contact_info,\n r.restaurant_id,\n c.city_id,\n c.city_name,\n c.city_name_ar,\n st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) ||\n' '::text) || st_y(b.location)) || ','::text) ||\nst_x(ro.location_geometry)) || ' '::text) ||\nst_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n ro.customer_comment,\n 'N'::text AS is_new_customer,\n ro.picked_up_time,\n ro.driver_assigned_date_time,\n oom.offer_amount,\n oom.offer_type_code AS offer_type,\n ro.uk_vat\n FROM business, restaurant_order ro\n\n JOIN branch b ON b.branch_id = ro.branch_id\nJOIN restaurant r ON r.restaurant_id = b.restaurant_id\n JOIN city c ON c.city_id = b.city_id\nLEFT JOIN order_offer_map oom using (order_id)\nWHERE ro.date_time >= business.start_date AND ro.date_time<=\nf_now_immutable_with_tz();\n\n\n\nHash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291)\n(actual time=1056.926..1056.934 rows=0 loops=1)\n Hash Cond: (ro.order_id = oom.order_id)\n -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209)\n(actual time=1056.926..1056.932 rows=0 loops=1)\n Hash Cond: (ro.branch_id = b.branch_id)\n -> Nested Loop (cost=5427.94..3546726.47 rows=19275986\nwidth=108) (actual time=1036.809..1036.810 rows=0 loops=1)\n -> Function Scan on start_date (cost=0.00..0.01 rows=1\nwidth=8) (actual time=0.006..0.008 rows=1 loops=1)\n -> Bitmap Heap Scan on restaurant_order ro\n(cost=5427.94..3353966.60 rows=19275986 width=108) (actual\ntime=1036.793..1036.793 rows=0 loops=1)\n Recheck Cond: ((date_time >=\nstart_date.start_date) AND (date_time <= 
'2021-06-04\n08:05:32.784199+00'::timestamp with time zone))\n Rows Removed by Index Recheck: 5039976\n Heap Blocks: lossy=275230\n -> Bitmap Index Scan on rest_ord_date_brin\n(cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038\nrows=2917120 loops=1)\n Index Cond: ((date_time >=\nstart_date.start_date) AND (date_time <= '2021-06-04\n08:05:32.784199+00'::timestamp with time zone))\n -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual\ntime=20.106..20.109 rows=20949 loops=1)\n Buckets: 32768 (originally 8192) Batches: 1 (originally\n1) Memory Usage: 3112kB\n -> Hash Join (cost=343.29..1083.35 rows=5866\nwidth=109) (actual time=1.620..14.539 rows=20949 loops=1)\n Hash Cond: (b.restaurant_id = r.restaurant_id)\n -> Hash Join (cost=2.26..726.91 rows=5866\nwidth=109) (actual time=0.029..8.597 rows=20949 loops=1)\n Hash Cond: (b.city_id = c.city_id)\n -> Seq Scan on branch b (cost=0.00..668.49\nrows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)\n -> Hash (cost=1.56..1.56 rows=56 width=29)\n(actual time=0.020..0.021 rows=56 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on city c\n(cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56\nloops=1)\n -> Hash (cost=233.42..233.42 rows=8609 width=8)\n(actual time=1.575..1.575 rows=8609 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 465kB\n -> Index Only Scan using\n\"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42\nrows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)\n Heap Fetches: 0\n -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)\n -> Seq Scan on order_offer_map oom (cost=0.00..33000.09\nrows=1273009 width=13) (never executed)\nPlanning Time: 1.180 ms\nExecution Time: 1057.535 ms\n\ncould some one explain why it is slow, if I insert 50k records the\nexecution time reaches 20 seconds\n\nbelow query is slow even with no dataexplain ANALYZEWITH business AS( SELECT * FROM get_businessday_utc_f() 
start_date) SELECT ro.order_id, ro.date_time, round(ro.order_amount, 2) AS order_amount, b.branch_id, b.branch_name, st_x(b.location) AS from_x, st_y(b.location) AS from_y, b.user_id AS branch_user_id, b.contact_info, r.restaurant_id, c.city_id, c.city_name, c.city_name_ar, st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) || ' '::text) || st_y(b.location)) || ','::text) || st_x(ro.location_geometry)) || ' '::text) || st_y(ro.location_geometry)) || ')'::text, 28355) AS from_to, to_char(ro.date_time, 'HH24:MI'::text) AS order_time, ro.customer_comment, 'N'::text AS is_new_customer, ro.picked_up_time, ro.driver_assigned_date_time, oom.offer_amount, oom.offer_type_code AS offer_type, ro.uk_vat FROM business, restaurant_order ro JOIN branch b ON b.branch_id = ro.branch_idJOIN restaurant r ON r.restaurant_id = b.restaurant_id JOIN city c ON c.city_id = b.city_idLEFT JOIN order_offer_map oom using (order_id)WHERE ro.date_time >= business.start_date AND ro.date_time<= f_now_immutable_with_tz();Hash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1) Hash Cond: (ro.order_id = oom.order_id) -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1) Hash Cond: (ro.branch_id = b.branch_id) -> Nested Loop (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 loops=1) -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1) -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1) Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone)) Rows Removed by Index Recheck: 5039976 Heap Blocks: lossy=275230 -> Bitmap Index Scan on rest_ord_date_brin (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 
rows=2917120 loops=1) Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone)) -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1) Buckets: 32768 (originally 8192) Batches: 1 (originally 1) Memory Usage: 3112kB -> Hash Join (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1) Hash Cond: (b.restaurant_id = r.restaurant_id) -> Hash Join (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1) Hash Cond: (b.city_id = c.city_id) -> Seq Scan on branch b (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1) -> Hash (cost=1.56..1.56 rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 12kB -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1) -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1) Buckets: 16384 Batches: 1 Memory Usage: 465kB -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1) Heap Fetches: 0 -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed) -> Seq Scan on order_offer_map oom (cost=0.00..33000.09 rows=1273009 width=13) (never executed)Planning Time: 1.180 msExecution Time: 1057.535 ms\n\ncould some one explain why it is slow, if I insert 50k records the execution time reaches 20 seconds",
"msg_date": "Fri, 4 Jun 2021 11:06:56 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query with inline function on AWS RDS with RDS 24x large"
},
{
"msg_contents": "pá 4. 6. 2021 v 10:07 odesílatel Ayub Khan <[email protected]> napsal:\n\n>\n> below query is slow even with no data\n>\n>\n> explain ANALYZE\n>\n> WITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n> SELECT ro.order_id,\n> ro.date_time,\n> round(ro.order_amount, 2) AS order_amount,\n> b.branch_id,\n> b.branch_name,\n> st_x(b.location) AS from_x,\n> st_y(b.location) AS from_y,\n> b.user_id AS branch_user_id,\n> b.contact_info,\n> r.restaurant_id,\n> c.city_id,\n> c.city_name,\n> c.city_name_ar,\n> st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) || ' '::text) || st_y(b.location)) || ','::text) || st_x(ro.location_geometry)) || ' '::text) || st_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n> to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n> ro.customer_comment,\n> 'N'::text AS is_new_customer,\n> ro.picked_up_time,\n> ro.driver_assigned_date_time,\n> oom.offer_amount,\n> oom.offer_type_code AS offer_type,\n> ro.uk_vat\n> FROM business, restaurant_order ro\n>\n> JOIN branch b ON b.branch_id = ro.branch_id\n> JOIN restaurant r ON r.restaurant_id = b.restaurant_id\n> JOIN city c ON c.city_id = b.city_id\n> LEFT JOIN order_offer_map oom using (order_id)\n> WHERE ro.date_time >= business.start_date AND ro.date_time<= f_now_immutable_with_tz();\n>\n>\n>\n> Hash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1)\n> Hash Cond: (ro.order_id = oom.order_id)\n> -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1)\n> Hash Cond: (ro.branch_id = b.branch_id)\n> -> Nested Loop (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 loops=1)\n> -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1)\n> -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual 
time=1036.793..1036.793 rows=0 loops=1)\n> Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n> Rows Removed by Index Recheck: 5039976\n> Heap Blocks: lossy=275230\n> -> Bitmap Index Scan on rest_ord_date_brin (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 rows=2917120 loops=1)\n> Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n> -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1)\n> Buckets: 32768 (originally 8192) Batches: 1 (originally 1) Memory Usage: 3112kB\n> -> Hash Join (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1)\n> Hash Cond: (b.restaurant_id = r.restaurant_id)\n> -> Hash Join (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1)\n> Hash Cond: (b.city_id = c.city_id)\n> -> Seq Scan on branch b (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)\n> -> Hash (cost=1.56..1.56 rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n> -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1)\n> -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 465kB\n> -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)\n> Heap Fetches: 0\n> -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)\n> -> Seq Scan on order_offer_map oom (cost=0.00..33000.09 rows=1273009 width=13) (never executed)\n> Planning Time: 1.180 ms\n> Execution Time: 1057.535 ms\n>\n> could some one explain why it is slow, if I insert 50k records the execution time reaches 20 
seconds\n>\n>\n -> Bitmap Heap Scan on restaurant_order ro\n (cost=5427.94..3353966.60 rows=19275986 width=108) (actual\ntime=1036.793..1036.793 rows=0 loops=1)\n Recheck Cond: ((date_time >= start_date.start_date) AND\n(date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n Rows Removed by Index Recheck: 5039976\n Heap Blocks: lossy=275230\n -> Bitmap Index Scan on rest_ord_date_brin\n (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038\nrows=2917120 loops=1)\n Index Cond: ((date_time >= start_date.start_date)\nAND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time\nzone))\n\nLooks so the BRIN index is not in good condition. Maybe you need reindex,\nmaybe BRIN index is not good format for your data.\n\nThere are lot of data - few millions of rows\n\nRegards\n\nPavel",
"msg_date": "Fri, 4 Jun 2021 10:22:27 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query with inline function on AWS RDS with RDS 24x large"
},
{
"msg_contents": "BRIN index is only on the date_time column, I even tried with btree index\nwith no performance gains.\n\nOn Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> pá 4. 6. 2021 v 10:07 odesílatel Ayub Khan <[email protected]> napsal:\n>\n>>\n>> below query is slow even with no data\n>>\n>>\n>> explain ANALYZE\n>>\n>> WITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n>> SELECT ro.order_id,\n>> ro.date_time,\n>> round(ro.order_amount, 2) AS order_amount,\n>> b.branch_id,\n>> b.branch_name,\n>> st_x(b.location) AS from_x,\n>> st_y(b.location) AS from_y,\n>> b.user_id AS branch_user_id,\n>> b.contact_info,\n>> r.restaurant_id,\n>> c.city_id,\n>> c.city_name,\n>> c.city_name_ar,\n>> st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) || ' '::text) || st_y(b.location)) || ','::text) || st_x(ro.location_geometry)) || ' '::text) || st_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n>> to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n>> ro.customer_comment,\n>> 'N'::text AS is_new_customer,\n>> ro.picked_up_time,\n>> ro.driver_assigned_date_time,\n>> oom.offer_amount,\n>> oom.offer_type_code AS offer_type,\n>> ro.uk_vat\n>> FROM business, restaurant_order ro\n>>\n>> JOIN branch b ON b.branch_id = ro.branch_id\n>> JOIN restaurant r ON r.restaurant_id = b.restaurant_id\n>> JOIN city c ON c.city_id = b.city_id\n>> LEFT JOIN order_offer_map oom using (order_id)\n>> WHERE ro.date_time >= business.start_date AND ro.date_time<= f_now_immutable_with_tz();\n>>\n>>\n>>\n>> Hash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1)\n>> Hash Cond: (ro.order_id = oom.order_id)\n>> -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1)\n>> Hash Cond: (ro.branch_id = b.branch_id)\n>> -> Nested Loop (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 
loops=1)\n>> -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1)\n>> -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)\n>> Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>> Rows Removed by Index Recheck: 5039976\n>> Heap Blocks: lossy=275230\n>> -> Bitmap Index Scan on rest_ord_date_brin (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 rows=2917120 loops=1)\n>> Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>> -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1)\n>> Buckets: 32768 (originally 8192) Batches: 1 (originally 1) Memory Usage: 3112kB\n>> -> Hash Join (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1)\n>> Hash Cond: (b.restaurant_id = r.restaurant_id)\n>> -> Hash Join (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1)\n>> Hash Cond: (b.city_id = c.city_id)\n>> -> Seq Scan on branch b (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)\n>> -> Hash (cost=1.56..1.56 rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n>> -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1)\n>> -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1)\n>> Buckets: 16384 Batches: 1 Memory Usage: 465kB\n>> -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)\n>> Heap Fetches: 0\n>> -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)\n>> -> Seq 
Scan on order_offer_map oom (cost=0.00..33000.09 rows=1273009 width=13) (never executed)\n>> Planning Time: 1.180 ms\n>> Execution Time: 1057.535 ms\n>>\n>> could some one explain why it is slow, if I insert 50k records the execution time reaches 20 seconds\n>>\n>>\n> -> Bitmap Heap Scan on restaurant_order ro\n> (cost=5427.94..3353966.60 rows=19275986 width=108) (actual\n> time=1036.793..1036.793 rows=0 loops=1)\n> Recheck Cond: ((date_time >= start_date.start_date)\n> AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time\n> zone))\n> Rows Removed by Index Recheck: 5039976\n> Heap Blocks: lossy=275230\n> -> Bitmap Index Scan on rest_ord_date_brin\n> (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038\n> rows=2917120 loops=1)\n> Index Cond: ((date_time >=\n> start_date.start_date) AND (date_time <= '2021-06-04\n> 08:05:32.784199+00'::timestamp with time zone))\n>\n> Looks so the BRIN index is not in good condition. Maybe you need reindex,\n> maybe BRIN index is not good format for your data.\n>\n> There are lot of data - few millions of rows\n>\n> Regards\n>\n> Pavel\n>\n>",
"msg_date": "Fri, 4 Jun 2021 11:32:24 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query with inline function on AWS RDS with RDS 24x large"
},
{
"msg_contents": "Hi\n\n\npá 4. 6. 2021 v 10:32 odesílatel Ayub Khan <[email protected]> napsal:\n\n> BRIN index is only on the date_time column, I even tried with btree index\n> with no performance gains.\n>\n\n -> Bitmap Heap Scan on restaurant_order ro\n(cost=5427.94..3353966.60 rows=19275986 width=108) (actual\ntime=1036.793..1036.793 rows=0 loops=1)\n Recheck Cond: ((date_time >=\nstart_date.start_date) AND (date_time <= '2021-06-04\n08:05:32.784199+00'::timestamp with time zone))\n Rows Removed by Index Recheck: 5039976\n Heap Blocks: lossy=275230\n\nWhen the most rows are removed in recheck, then the effectivity of the\nindex is not good\n\nPavel\n\n\n\n\n>\n> On Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> pá 4. 6. 2021 v 10:07 odesílatel Ayub Khan <[email protected]> napsal:\n>>\n>>>\n>>> below query is slow even with no data\n>>>\n>>>\n>>> explain ANALYZE\n>>>\n>>> WITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n>>> SELECT ro.order_id,\n>>> ro.date_time,\n>>> round(ro.order_amount, 2) AS order_amount,\n>>> b.branch_id,\n>>> b.branch_name,\n>>> st_x(b.location) AS from_x,\n>>> st_y(b.location) AS from_y,\n>>> b.user_id AS branch_user_id,\n>>> b.contact_info,\n>>> r.restaurant_id,\n>>> c.city_id,\n>>> c.city_name,\n>>> c.city_name_ar,\n>>> st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) || ' '::text) || st_y(b.location)) || ','::text) || st_x(ro.location_geometry)) || ' '::text) || st_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n>>> to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n>>> ro.customer_comment,\n>>> 'N'::text AS is_new_customer,\n>>> ro.picked_up_time,\n>>> ro.driver_assigned_date_time,\n>>> oom.offer_amount,\n>>> oom.offer_type_code AS offer_type,\n>>> ro.uk_vat\n>>> FROM business, restaurant_order ro\n>>>\n>>> JOIN branch b ON b.branch_id = ro.branch_id\n>>> JOIN restaurant r ON r.restaurant_id = b.restaurant_id\n>>> JOIN city c ON c.city_id = 
b.city_id\n>>> LEFT JOIN order_offer_map oom using (order_id)\n>>> WHERE ro.date_time >= business.start_date AND ro.date_time<= f_now_immutable_with_tz();\n>>>\n>>>\n>>>\n>>> Hash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1)\n>>> Hash Cond: (ro.order_id = oom.order_id)\n>>> -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1)\n>>> Hash Cond: (ro.branch_id = b.branch_id)\n>>> -> Nested Loop (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 loops=1)\n>>> -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1)\n>>> -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)\n>>> Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>>> Rows Removed by Index Recheck: 5039976\n>>> Heap Blocks: lossy=275230\n>>> -> Bitmap Index Scan on rest_ord_date_brin (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 rows=2917120 loops=1)\n>>> Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>>> -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1)\n>>> Buckets: 32768 (originally 8192) Batches: 1 (originally 1) Memory Usage: 3112kB\n>>> -> Hash Join (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1)\n>>> Hash Cond: (b.restaurant_id = r.restaurant_id)\n>>> -> Hash Join (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1)\n>>> Hash Cond: (b.city_id = c.city_id)\n>>> -> Seq Scan on branch b (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)\n>>> -> Hash (cost=1.56..1.56 
rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1)\n>>> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n>>> -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1)\n>>> -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1)\n>>> Buckets: 16384 Batches: 1 Memory Usage: 465kB\n>>> -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)\n>>> Heap Fetches: 0\n>>> -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)\n>>> -> Seq Scan on order_offer_map oom (cost=0.00..33000.09 rows=1273009 width=13) (never executed)\n>>> Planning Time: 1.180 ms\n>>> Execution Time: 1057.535 ms\n>>>\n>>> could some one explain why it is slow, if I insert 50k records the execution time reaches 20 seconds\n>>>\n>>>\n>> -> Bitmap Heap Scan on restaurant_order ro\n>> (cost=5427.94..3353966.60 rows=19275986 width=108) (actual\n>> time=1036.793..1036.793 rows=0 loops=1)\n>> Recheck Cond: ((date_time >= start_date.start_date)\n>> AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time\n>> zone))\n>> Rows Removed by Index Recheck: 5039976\n>> Heap Blocks: lossy=275230\n>> -> Bitmap Index Scan on rest_ord_date_brin\n>> (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038\n>> rows=2917120 loops=1)\n>> Index Cond: ((date_time >=\n>> start_date.start_date) AND (date_time <= '2021-06-04\n>> 08:05:32.784199+00'::timestamp with time zone))\n>>\n>> Looks so the BRIN index is not in good condition. Maybe you need reindex,\n>> maybe BRIN index is not good format for your data.\n>>\n>> There are lot of data - few millions of rows\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>\n>\n>",
"msg_date": "Fri, 4 Jun 2021 10:40:30 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query with inline function on AWS RDS with RDS 24x large"
},
{
"msg_contents": "You are right, I dropped BRIN and created btree and the performance on 0\nrows matching criteria table is good, below is the plan with BTREE. I will\ntest by inserting lot of data.\n\n\nHash Join (cost=50186.91..3765911.10 rows=5397411 width=291) (actual\ntime=1.501..1.504 rows=0 loops=1)\n Hash Cond: (b.restaurant_id = r.restaurant_id)\n -> Hash Left Join (cost=49845.88..2078197.48 rows=5397411 width=216)\n(actual time=0.079..0.081 rows=0 loops=1)\n Hash Cond: (ro.order_id = oom.order_id)\n -> Hash Join (cost=933.18..2007856.35 rows=5397411 width=209)\n(actual time=0.078..0.080 rows=0 loops=1)\n Hash Cond: (b.city_id = c.city_id)\n -> Hash Join (cost=930.92..1956181.11 rows=19276467\nwidth=188) (actual time=0.048..0.050 rows=0 loops=1)\n Hash Cond: (ro.branch_id = b.branch_id)\n -> Nested Loop (cost=0.56..1904639.80 rows=19276467\nwidth=108) (actual time=0.048..0.048 rows=0 loops=1)\n -> Function Scan on start_date (cost=0.00..0.01\nrows=1 width=8) (actual time=0.002..0.003 rows=1 loops=1)\n -> Index Scan using rest_ord_date_brin on\nrestaurant_order ro (cost=0.56..1711875.12 rows=19276467 width=108)\n(actual time=0.042..0.042 rows=0 loops=1)\n Index Cond: ((date_time >=\nstart_date.start_date) AND (date_time <= '2021-06-04\n08:48:45.377833+00'::timestamp with time zone))\n -> Hash (cost=668.49..668.49 rows=20949 width=88)\n(never executed)\n -> Seq Scan on branch b (cost=0.00..668.49\nrows=20949 width=88) (never executed)\n -> Hash (cost=1.56..1.56 rows=56 width=29) (actual\ntime=0.026..0.027 rows=56 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on city c (cost=0.00..1.56 rows=56\nwidth=29) (actual time=0.009..0.016 rows=56 loops=1)\n -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never\nexecuted)\n -> Seq Scan on order_offer_map oom (cost=0.00..33000.09\nrows=1273009 width=13) (never executed)\n -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual\ntime=1.403..1.403 rows=8609 loops=1)\n Buckets: 16384 Batches: 
1 Memory Usage: 465kB\n -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant\nr (cost=0.29..233.42 rows=8609 width=8) (actual time=0.007..0.634\nrows=8609 loops=1)\n Heap Fetches: 0\nPlanning Time: 1.352 ms\nExecution Time: 1.571 ms\n\nOn Fri, Jun 4, 2021 at 11:41 AM Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n>\n> pá 4. 6. 2021 v 10:32 odesílatel Ayub Khan <[email protected]> napsal:\n>\n>> BRIN index is only on the date_time column, I even tried with btree index\n>> with no performance gains.\n>>\n>\n> -> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)\n> Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n> Rows Removed by Index Recheck: 5039976\n> Heap Blocks: lossy=275230\n>\n> When the most rows are removed in recheck, then the effectivity of the index is not good\n>\n> Pavel\n>\n>\n>\n>\n>>\n>> On Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> pá 4. 6. 
2021 v 10:07 odesílatel Ayub Khan <[email protected]> napsal:\n>>>\n>>>>\n>>>> below query is slow even with no data\n>>>>\n>>>>\n>>>> explain ANALYZE\n>>>>\n>>>> WITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n>>>> SELECT ro.order_id,\n>>>> ro.date_time,\n>>>> round(ro.order_amount, 2) AS order_amount,\n>>>> b.branch_id,\n>>>> b.branch_name,\n>>>> st_x(b.location) AS from_x,\n>>>> st_y(b.location) AS from_y,\n>>>> b.user_id AS branch_user_id,\n>>>> b.contact_info,\n>>>> r.restaurant_id,\n>>>> c.city_id,\n>>>> c.city_name,\n>>>> c.city_name_ar,\n>>>> st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) || ' '::text) || st_y(b.location)) || ','::text) || st_x(ro.location_geometry)) || ' '::text) || st_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n>>>> to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n>>>> ro.customer_comment,\n>>>> 'N'::text AS is_new_customer,\n>>>> ro.picked_up_time,\n>>>> ro.driver_assigned_date_time,\n>>>> oom.offer_amount,\n>>>> oom.offer_type_code AS offer_type,\n>>>> ro.uk_vat\n>>>> FROM business, restaurant_order ro\n>>>>\n>>>> JOIN branch b ON b.branch_id = ro.branch_id\n>>>> JOIN restaurant r ON r.restaurant_id = b.restaurant_id\n>>>> JOIN city c ON c.city_id = b.city_id\n>>>> LEFT JOIN order_offer_map oom using (order_id)\n>>>> WHERE ro.date_time >= business.start_date AND ro.date_time<= f_now_immutable_with_tz();\n>>>>\n>>>>\n>>>>\n>>>> Hash Left Join (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1)\n>>>> Hash Cond: (ro.order_id = oom.order_id)\n>>>> -> Hash Join (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1)\n>>>> Hash Cond: (ro.branch_id = b.branch_id)\n>>>> -> Nested Loop (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 loops=1)\n>>>> -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1)\n>>>> 
-> Bitmap Heap Scan on restaurant_order ro (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)\n>>>> Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>>>> Rows Removed by Index Recheck: 5039976\n>>>> Heap Blocks: lossy=275230\n>>>> -> Bitmap Index Scan on rest_ord_date_brin (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 rows=2917120 loops=1)\n>>>> Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))\n>>>> -> Hash (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1)\n>>>> Buckets: 32768 (originally 8192) Batches: 1 (originally 1) Memory Usage: 3112kB\n>>>> -> Hash Join (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1)\n>>>> Hash Cond: (b.restaurant_id = r.restaurant_id)\n>>>> -> Hash Join (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1)\n>>>> Hash Cond: (b.city_id = c.city_id)\n>>>> -> Seq Scan on branch b (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)\n>>>> -> Hash (cost=1.56..1.56 rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1)\n>>>> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n>>>> -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1)\n>>>> -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1)\n>>>> Buckets: 16384 Batches: 1 Memory Usage: 465kB\n>>>> -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)\n>>>> Heap Fetches: 0\n>>>> -> Hash (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)\n>>>> -> Seq Scan on order_offer_map oom (cost=0.00..33000.09 rows=1273009 width=13) (never 
executed)\n>>>> Planning Time: 1.180 ms\n>>>> Execution Time: 1057.535 ms\n>>>>\n>>>> could some one explain why it is slow, if I insert 50k records the execution time reaches 20 seconds\n>>>>\n>>>>\n>>> -> Bitmap Heap Scan on restaurant_order ro\n>>> (cost=5427.94..3353966.60 rows=19275986 width=108) (actual\n>>> time=1036.793..1036.793 rows=0 loops=1)\n>>> Recheck Cond: ((date_time >= start_date.start_date)\n>>> AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time\n>>> zone))\n>>> Rows Removed by Index Recheck: 5039976\n>>> Heap Blocks: lossy=275230\n>>> -> Bitmap Index Scan on rest_ord_date_brin\n>>> (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038\n>>> rows=2917120 loops=1)\n>>> Index Cond: ((date_time >=\n>>> start_date.start_date) AND (date_time <= '2021-06-04\n>>> 08:05:32.784199+00'::timestamp with time zone))\n>>>\n>>> Looks so the BRIN index is not in good condition. Maybe you need\n>>> reindex, maybe BRIN index is not good format for your data.\n>>>\n>>> There are lot of data - few millions of rows\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>\n>>\n>>\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Fri, 4 Jun 2021 11:51:13 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query with inline function on AWS RDS with RDS 24x large"
}
] |
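Pavel's diagnosis above hinges on how BRIN works: the index stores only a min/max summary per range of heap pages, so it can exclude a range only when the column's values are physically correlated with heap order. This is a minimal Python model of that behaviour (the key space, the 128-values-per-range block size, and the "last 10% of the keys" query are invented for illustration, not taken from the thread):

```python
import random

def brin_ranges(values, pages_per_range):
    """Summarise each block range by (min, max), as a BRIN index does."""
    ranges = []
    for i in range(0, len(values), pages_per_range):
        chunk = values[i:i + pages_per_range]
        ranges.append((min(chunk), max(chunk)))
    return ranges

def ranges_to_scan(ranges, lo, hi):
    """Count block ranges whose summary overlaps the query interval [lo, hi]."""
    return sum(1 for rmin, rmax in ranges if rmax >= lo and rmin <= hi)

data = list(range(100_000))       # stand-in for a date_time key
query = (90_000, 100_000)         # "most recent 10%" of the key space

correlated = brin_ranges(data, 128)          # heap physically ordered by key
random.seed(1)
shuffled = data[:]
random.shuffle(shuffled)
scattered = brin_ranges(shuffled, 128)       # heap order uncorrelated with key

print(ranges_to_scan(correlated, *query), "of", len(correlated), "ranges scanned (correlated)")
print(ranges_to_scan(scattered, *query), "of", len(scattered), "ranges scanned (random order)")
```

With correlated data only the tail ranges survive the summary check; with shuffled data essentially every range overlaps the interval and must be rechecked, which mirrors the `Heap Blocks: lossy=275230` / `Rows Removed by Index Recheck: 5039976` lines in Ayub's plan.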
[
{
"msg_contents": "Dear All,\n\nI have demo stand on Hyper-V (2xIntel E5-2650v2, 256GB RAM, Intel SSDs in RAID):\n* One VM with 1C application Server\n* 2 database VMs with same database imported into each PostgreSQL (~56Gb, \"1C accounting 3.0\" config):\n1. CentOS 7 with postgresql_12.6_6.1C_x86_64 (distribution with special patches for running 1C), VM is updated with LIS for Hyper-V\n2. Windows Server 2019 with same postgresql_12.6_6.1C_x86_64\n\nMy real life test is to \"register\" 10 _same_ documents (провести документы) in each of 1C/PostgreSQL DBs. Both PostgreSQL DBs are identical and just before test imported to PostgreSQL via application server (DT import).\nOn Windows Server test procedure takes 20-30 seconds, on Linux it takes 1m-1m10seconds. PostgreSQL VMs are running on same Hypervisor with same resources assigned to each of them.\nTuning PostgreSQL config and/or CentOS don't make any difference. Contrary on Windows VM we have almost 3x better performance with stock PostgreSQL config.\n\nAny ideas what's wrong? For me such a big difference on identical databases/queries looks strange.\n\n--\nTaras Savchuk",
"msg_date": "Fri, 4 Jun 2021 11:53:12 +0000",
"msg_from": "Taras Savchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "PgSQL 12 on WinSrv ~3x faster than on Linux"
},
{
"msg_contents": "On Fri, 4 Jun 2021 at 23:53, Taras Savchuk <[email protected]> wrote:\n> My real life test is to \"register\" 10 _same_ documents (провести документы) in each of 1C/PostgreSQL DBs. Both PostgreSQL DBs are identical and just before test imported to PostgreSQL via application server (DT import).\n> On Windows Server test procedure takes 20-30 seconds, on Linux it takes 1m-1m10seconds. PostgreSQL VMs are running on same Hypervisor with same resources assigned to each of them.\n> Tuning PostgreSQL config and/or CentOS don't make any difference. Contrary on Windows VM we have almost 3x better performance with stock PostgreSQL config.\n>\n> Any ideas what's wrong? For me such a big difference on identical databases/queries looks strange.\n\nIt's pretty difficult to say. You've not provided any useful details\nabout the workload you're running.\n\nIf this \"register 10 _same_ documents\" thing requires running some\nquery, then you might want to look at EXPLAIN (ANALYZE, BUFFERS) for\nthat query. You might want to consider doing SET track_io_timing =\non; Perhaps Linux is having to read more buffers from disk than\nWindows.\n\nDavid.\n\n\n",
"msg_date": "Sat, 5 Jun 2021 00:42:34 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgSQL 12 on WinSrv ~3x faster than on Linux"
},
{
"msg_contents": "also if you can setup an external timer \\timing , along with explain\nanalyse to get total time, it would help if everything else is same.\n\n\nI have seen some threads thar mention added startup cost for parallel\nworkers on windows but not on Linux.\nBut I do not want to mix those threads here, but just FYI.\n\n\nOn Fri, Jun 4, 2021, 6:12 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 4 Jun 2021 at 23:53, Taras Savchuk <[email protected]> wrote:\n> > My real life test is to \"register\" 10 _same_ documents (провести\n> документы) in each of 1C/PostgreSQL DBs. Both PostgreSQL DBs are identical\n> and just before test imported to PostgreSQL via application server (DT\n> import).\n> > On Windows Server test procedure takes 20-30 seconds, on Linux it takes\n> 1m-1m10seconds. PostgreSQL VMs are running on same Hypervisor with same\n> resources assigned to each of them.\n> > Tuning PostgreSQL config and/or CentOS don't make any difference.\n> Contrary on Windows VM we have almost 3x better performance with stock\n> PostgreSQL config.\n> >\n> > Any ideas what's wrong? For me such a big difference on identical\n> databases/queries looks strange.\n>\n> It's pretty difficult to say. You've not provided any useful details\n> about the workload you're running.\n>\n> If this \"register 10 _same_ documents\" thing requires running some\n> query, then you might want to look at EXPLAIN (ANALYZE, BUFFERS) for\n> that query. 
You might want to consider doing SET track_io_timing =\n> on; Perhaps Linux is having to read more buffers from disk than\n> Windows.\n>\n> David.",
"msg_date": "Fri, 4 Jun 2021 18:44:16 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgSQL 12 on WinSrv ~3x faster than on Linux"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Fri, 4 Jun 2021 at 23:53, Taras Savchuk <[email protected]> wrote:\n>> Any ideas what's wrong? For me such a big difference on identical databases/queries looks strange.\n\n> It's pretty difficult to say. You've not provided any useful details\n> about the workload you're running.\n> If this \"register 10 _same_ documents\" thing requires running some\n> query, then you might want to look at EXPLAIN (ANALYZE, BUFFERS) for\n> that query. You might want to consider doing SET track_io_timing =\n> on; Perhaps Linux is having to read more buffers from disk than\n> Windows.\n\nThe first thing that comes to mind for me is fsync working correctly\n(i.e. actually waiting for the disk write) in Linux but not in Windows.\nOn a weird VM stack like you've got, it's not hard for that sort of\nthing to go wrong. Needless to say, if that's the issue then the\napparent performance win is coming at the cost of crash safety.\n\npg_test_fsync might help detect such a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Jun 2021 09:40:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PgSQL 12 on WinSrv ~3x faster than on Linux"
},
{
"msg_contents": "> The first thing that comes to mind for me is fsync working correctly (i.e.\n> actually waiting for the disk write) in Linux but not in Windows.\n> On a weird VM stack like you've got, it's not hard for that sort of thing to go\n> wrong. Needless to say, if that's the issue then the apparent performance\n> win is coming at the cost of crash safety.\n> \n> pg_test_fsync might help detect such a problem.\n> \n> \t\t\tregards, tom lane\n> \n\nfsync performance on win is much better (results are below). Also network performance for VMs on same HV for win-win is 40% better than for win-linux (5,97Gbps vs 3,6Gbps).\nRegarding weird VM stack - we're running both win and linux VMs and Hyper-V works reasonable well except... this issue )\n\nWin:\nC:\\Program Files\\PostgreSQL\\12.6-6.1C\\bin>pg_test_fsync.exe\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 7423,645 ops/sec 135 usecs/op\n fdatasync n/a\n fsync 1910,611 ops/sec 523 usecs/op\n fsync_writethrough 1987,900 ops/sec 503 usecs/op\n open_sync n/a\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 3827,254 ops/sec 261 usecs/op\n fdatasync n/a\n fsync 1920,720 ops/sec 521 usecs/op\n fsync_writethrough 1863,852 ops/sec 537 usecs/op\n open_sync n/a\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write n/a\n 2 * 8kB open_sync writes n/a\n 4 * 4kB open_sync writes n/a\n 8 * 2kB open_sync writes n/a\n 16 * 1kB open_sync writes n/a\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 144,065 ops/sec 6941 
usecs/op\n write, close, fsync 148,751 ops/sec 6723 usecs/op\n\nNon-sync'ed 8kB writes:\n write 165,484 ops/sec 6043 usecs/op\n\nLinux:\n[root@pgsql12 ~]# /usr/pgsql-12/bin/pg_test_fsync\n\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 2947.296 ops/sec 339 usecs/op\n fdatasync 2824.271 ops/sec 354 usecs/op\n fsync 1885.924 ops/sec 530 usecs/op\n fsync_writethrough n/a\n open_sync 1816.312 ops/sec 551 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 1458.849 ops/sec 685 usecs/op\n fdatasync 2712.756 ops/sec 369 usecs/op\n fsync 1769.353 ops/sec 565 usecs/op\n fsync_writethrough n/a\n open_sync 902.626 ops/sec 1108 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write 1798.811 ops/sec 556 usecs/op\n 2 * 8kB open_sync writes 887.727 ops/sec 1126 usecs/op\n 4 * 4kB open_sync writes 494.843 ops/sec 2021 usecs/op\n 8 * 2kB open_sync writes 233.659 ops/sec 4280 usecs/op\n 16 * 1kB open_sync writes 117.417 ops/sec 8517 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 1673.781 ops/sec 597 usecs/op\n write, close, fsync 1727.787 ops/sec 579 usecs/op\n\nNon-sync'ed 8kB writes:\n write 200638.271 ops/sec 5 usecs/op\n\n--\nTaras\n\n\n",
"msg_date": "Fri, 4 Jun 2021 14:01:51 +0000",
"msg_from": "Taras Savchuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PgSQL 12 on WinSrv ~3x faster than on Linux"
}
] |
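Tom's crash-safety point can be probed outside PostgreSQL: if fsync really waits for stable storage, synchronous 8kB writes should be dramatically slower than buffered ones, which is essentially what pg_test_fsync measures. Here is a rough, self-contained Python sketch of the same comparison (absolute numbers depend entirely on the filesystem backing the temp directory, so treat it as an illustration rather than a benchmark):

```python
import os
import tempfile
import time

def writes_per_second(n, sync):
    """Write n 8kB blocks to a temp file, optionally fsync'ing each one,
    and return the achieved writes/second (roughly pg_test_fsync's metric)."""
    payload = b"\x00" * 8192
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(n):
            f.write(payload)
            f.flush()               # push from Python's buffer to the kernel
            if sync:
                os.fsync(f.fileno())  # force the kernel to reach stable storage
        elapsed = time.perf_counter() - start
    return n / max(elapsed, 1e-9)

buffered = writes_per_second(200, sync=False)
durable = writes_per_second(200, sync=True)
print(f"buffered: {buffered:,.0f} writes/s  fsync'd: {durable:,.0f} writes/s")
```

If the fsync'd rate comes out implausibly close to the buffered rate on real hardware, a write cache is probably absorbing the flush — the kind of false result Tom is warning about on a virtualized Windows/Hyper-V storage stack.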
[
{
"msg_contents": "I am using postgres 12 on AWS RDS\n\ncould someone clarify why the LEFT JOIN order_offer_map oom using\n(order_id) in the below query is using sequential scan instead of\nusing index on order_id which is defined in order_offer_map table.\n\n\n\n\nexplain ANALYZE\n\nWITH business AS( SELECT * FROM get_businessday_utc_f() start_date)\n SELECT ro.order_id,\n ro.date_time,\n round(ro.order_amount, 2) AS order_amount,\n b.branch_id,\n b.branch_name,\n st_x(b.location) AS from_x,\n st_y(b.location) AS from_y,\n b.user_id AS branch_user_id,\n b.contact_info,\n r.restaurant_id,\n c.city_id,\n c.city_name,\n c.city_name_ar,\n st_linefromtext(((((((('LINESTRING('::text || st_x(b.location)) ||\n' '::text) || st_y(b.location)) || ','::text) ||\nst_x(ro.location_geometry)) || ' '::text) ||\nst_y(ro.location_geometry)) || ')'::text, 28355) AS from_to,\n to_char(ro.date_time, 'HH24:MI'::text) AS order_time,\n ro.customer_comment,\n 'N'::text AS is_new_customer,\n ro.picked_up_time,\n ro.driver_assigned_date_time,\n oom.offer_amount,\n oom.offer_type_code AS offer_type,\n ro.uk_vat\n FROM business, restaurant_order ro\n\n JOIN branch b ON b.branch_id = ro.branch_id\nJOIN restaurant r ON r.restaurant_id = b.restaurant_id\n JOIN city c ON c.city_id = b.city_id\nLEFT JOIN order_offer_map oom using (order_id)\nWHERE ro.date_time >= business.start_date AND ro.date_time<=\nf_now_immutable_with_tz();\n\n\n\n\n\nHash Join (cost=39897.37..161872.06 rows=259399 width=494) (actual\ntime=605.767..2060.712 rows=156253 loops=1)\n Hash Cond: (b.city_id = c.city_id)\n -> Hash Join (cost=39895.11..78778.16 rows=259399 width=355) (actual\ntime=605.583..789.863 rows=156253 loops=1)\n Hash Cond: (b.restaurant_id = r.restaurant_id)\n -> Hash Join (cost=39542.41..77744.20 rows=259399 width=307)\n(actual time=602.096..738.765 rows=156253 loops=1)\n Hash Cond: (ro.branch_id = b.branch_id)\n -> Hash Right Join (cost=38607.06..76127.79 rows=259399\nwidth=225) (actual time=591.342..672.039 
rows=156253 loops=1)\n Hash Cond: (oom.order_id = ro.order_id)\n -> Seq Scan on order_offer_map oom\n (cost=0.00..34179.09 rows=1273009 width=15) (actual time=0.007..91.121\nrows=1273009 loops=1)\n -> Hash (cost=35364.57..35364.57 rows=259399\nwidth=218) (actual time=244.571..244.571 rows=156253 loops=1)\n Buckets: 262144 Batches: 1 Memory Usage: 29098kB\n -> Index Scan using \"idx$$_00010001\" on\nrestaurant_order ro (cost=0.56..35364.57 rows=259399 width=218) (actual\ntime=0.033..195.939 rows=156253 loops=1)\n Index Cond: ((date_time >= '2021-05-27\n05:00:00'::timestamp without time zone) AND (date_time <= '2021-06-05\n16:38:22.758875+00'::timestamp with time zone))\n Filter: (order_status_code = 'D'::bpchar)\n Rows Removed by Filter: 73969\n -> Hash (cost=673.49..673.49 rows=20949 width=90) (actual\ntime=10.715..10.715 rows=20949 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 2758kB\n -> Seq Scan on branch b (cost=0.00..673.49 rows=20949\nwidth=90) (actual time=0.006..6.397 rows=20949 loops=1)\n -> Hash (cost=245.09..245.09 rows=8609 width=56) (actual\ntime=3.466..3.467 rows=8609 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 903kB\n -> Seq Scan on restaurant r (cost=0.00..245.09 rows=8609\nwidth=56) (actual time=0.003..2.096 rows=8609 loops=1)\n -> Hash (cost=1.56..1.56 rows=56 width=29) (actual time=0.026..0.026\nrows=56 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Seq Scan on city c (cost=0.00..1.56 rows=56 width=29) (actual\ntime=0.007..0.015 rows=56 loops=1)\nPlanning Time: 1.377 ms\nExecution Time: 2071.965 ms\n\n-Ayub",
"msg_date": "Sat, 5 Jun 2021 19:42:37 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "query planner not using index, instead using squential scan"
},
{
"msg_contents": "Ayub Khan <[email protected]> writes:\n> could someone clarify why the LEFT JOIN order_offer_map oom using\n> (order_id) in the below query is using sequential scan instead of\n> using index on order_id which is defined in order_offer_map table.\n\nProbably because it estimates the hash join to restaurant_order is\nfaster than a nestloop join would be. I think it's likely right.\nYou'd need very optimistic assumptions about the cost of an\nindividual index probe into order_offer_map to conclude that 156K\nof them would be faster than the 476ms that are being spent here\nto read order_offer_map and join it to the result of the\nindexscan on restaurant_order.\n\nIf, indeed, that *is* faster on your hardware, you might want\nto dial down random_page_cost to get more-relevant estimates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Jun 2021 15:29:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner not using index, instead using squential scan"
},
{
"msg_contents": "thanks Tom.\n\nI was trying to simulate some scenarios to be able to explain how the plan\nwould change with/without\n*Rows Removed by Filter: 73969 * -- by using a different/correct index.\n\npostgres=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default\n------------+-----------------------------+-----------+----------+---------\n id | integer | | not null |\n created_on | timestamp without time zone | | |\n col1 | text | | |\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (id)\n \"t_created_on_idx\" btree (created_on) WHERE col1 = 'a'::text ---\nuseless index as all rows have col1 = 'a', but to attempt lossy case\n \"t_created_on_idx1\" btree (created_on)\nReferenced by:\n TABLE \"t1\" CONSTRAINT \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n\npostgres=# \\d t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n t1_id | integer | | not null |\n id | integer | | |\n col2 | text | | |\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (t1_id)\nForeign-key constraints:\n \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n\n\n\npostgres=# update t set col1 = 'a';\nUPDATE 1000\n\npostgres=# explain analyze select 1 from t join t1 on (t.id = t1.id) where\ncreated_on = '2021-06-01 12:48:45.141123';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Hash Join (cost=37.01..39.28 rows=1 width=4) (actual time=0.124..0.125\nrows=0 loops=1)\n Hash Cond: (t1.id = t.id)\n -> Seq Scan on t1 (cost=0.00..2.00 rows=100 width=4) (actual\ntime=0.004..0.008 rows=100 loops=1)\n -> Hash (cost=37.00..37.00 rows=1 width=4) (actual time=0.109..0.109\nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on t (cost=0.00..37.00 rows=1 width=4) (actual\ntime=0.058..0.107 rows=1 loops=1)\n Filter: (created_on = '2021-06-01\n12:48:45.141123'::timestamp without time zone)\n *Rows Removed by Filter: 999 
--- as no useful\nindex, t_created_on_idx will fetch all pages and then remove rows from\nthem, expensive*\n Planning Time: 0.111 ms\n Execution Time: 0.162 ms\n(10 rows)\n\n\npostgres=# explain analyze select 1 from t join t1 on (t.id = t1.id) where\ncreated_on = '2021-06-01 12:48:45.141123';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=8.32..33.47 rows=1 width=4) (actual time=0.025..0.026\nrows=0 loops=1)\n Hash Cond: (t1.id = t.id)\n -> Seq Scan on t1 (cost=0.00..22.00 rows=1200 width=4) (actual\ntime=0.009..0.009 rows=1 loops=1)\n -> Hash (cost=8.31..8.31 rows=1 width=4) (actual time=0.014..0.014\nrows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Index Scan using t_created_on_idx1 on t (cost=0.29..8.31\nrows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)\n Index Cond: (created_on = '2021-06-01\n12:48:45.141123'::timestamp without time zone) -- *exact match using btree\nindex, *\n Planning Time: 0.255 ms\n Execution Time: 0.071 ms\n(9 rows)\n\n\nbut from Ayub's plan, the number of rows fetched are a lot, but is also\nremoving rows post index scan.\nif that can be improved with a btree index that does not filter unwanted\nrows, the run may be faster ?\nbut i guess if there are 156k rows, planner would a have found a win in seq\nscan.\n\nAyub,\njust for the sake of understanding,\n\ncan you run the query using\n\npostgres=# set enable_seqscan TO 0;\nSET\npostgres=# -- explain analyze <run the query>\n\npostgres=# set enable_seqscan TO 1;\nSET\n\n\nOn Sun, 6 Jun 2021 at 00:59, Tom Lane <[email protected]> wrote:\n\n> Ayub Khan <[email protected]> writes:\n> > could someone clarify why the LEFT JOIN order_offer_map oom using\n> > (order_id) in the below query is using sequential scan instead of\n> > using index on order_id which is defined in order_offer_map table.\n>\n> Probably because it estimates the hash join to 
restaurant_order is\n> faster than a nestloop join would be.  I think it's likely right.\n> You'd need very optimistic assumptions about the cost of an\n> individual index probe into order_offer_map to conclude that 156K\n> of them would be faster than the 476ms that are being spent here\n> to read order_offer_map and join it to the result of the\n> indexscan on restaurant_order.\n>\n> If, indeed, that *is* faster on your hardware, you might want\n> to dial down random_page_cost to get more-relevant estimates.\n>\n>                         regards, tom lane\n>\n>\n>\n\n-- \nThanks,\nVijay\nMumbai, India",
"msg_date": "Sun, 6 Jun 2021 01:22:06 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner not using index, instead using squential scan"
},
{
"msg_contents": "by setting enable_sequence_scan=OFF, the query execution time seems to\nhave slowed down even though the index is being used for left join of\norder_offer_map\n\nHash Left Join (cost=72639.74..8176118.25 rows=19276467 width=293) (actual\ntime=858.853..3166.994 rows=230222 loops=1)\n Hash Cond: (ro.order_id = oom.order_id)\n -> Hash Join (cost=1947.33..2053190.95 rows=19276467 width=211) (actual\ntime=20.550..462.303 rows=230222 loops=1)\n Hash Cond: (b.city_id = c.city_id)\n -> Hash Join (cost=1937.65..1998751.06 rows=19276467 width=190)\n(actual time=20.523..399.979 rows=230222 loops=1)\n Hash Cond: (b.restaurant_id = r.restaurant_id)\n -> Hash Join (cost=1596.61..1947784.40 rows=19276467\nwidth=190) (actual time=19.047..339.984 rows=230222 loops=1)\n Hash Cond: (ro.branch_id = b.branch_id)\n -> Nested Loop (cost=0.56..1895577.38 rows=19276467\nwidth=108) (actual time=0.032..240.278 rows=230222 loops=1)\n -> Function Scan on start_date (cost=0.00..0.01\nrows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1)\n -> Index Scan using \"idx$$_00010001\" on\nrestaurant_order ro (cost=0.56..1702812.70 rows=19276467 width=108)\n(actual time=0.025..117.525 rows=230222 loops=1)\n Index Cond: ((date_time >=\nstart_date.start_date) AND (date_time <= '2021-06-05\n21:09:50.161463+00'::timestamp with time zone))\n -> Hash (cost=1334.19..1334.19 rows=20949 width=90)\n(actual time=18.969..18.969 rows=20949 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 2758kB\n -> Index Scan using \"branch_idx$$_274b0038\" on\nbranch b (cost=0.29..1334.19 rows=20949 width=90) (actual\ntime=0.008..14.371 rows=20949 loops=1)\n -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual\ntime=1.450..1.451 rows=8609 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 465kB\n -> Index Only Scan using \"restaurant_idx$$_274b003d\"\non restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual\ntime=0.011..0.660 rows=8609 loops=1)\n Heap Fetches: 0\n -> Hash (cost=8.98..8.98 rows=56 
width=29) (actual\ntime=0.021..0.021 rows=56 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Index Only Scan using \"city_idx$$_274b0022\" on city c\n (cost=0.14..8.98 rows=56 width=29) (actual time=0.004..0.010 rows=56\nloops=1)\n Heap Fetches: 0\n -> Hash (cost=54779.81..54779.81 rows=1273009 width=15) (actual\ntime=836.132..836.133 rows=1273009 loops=1)\n Buckets: 2097152 Batches: 1 Memory Usage: 81629kB\n -> Index Scan Backward using order_offer_map_order_id on\norder_offer_map oom (cost=0.43..54779.81 rows=1273009 width=15) (actual\ntime=0.010..578.226 rows=1273009 loops=1)\nPlanning Time: 1.229 ms\nExecution Time: 3183.248 ms\n\nOn Sat, Jun 5, 2021 at 10:52 PM Vijaykumar Jain <\[email protected]> wrote:\n\n> thanks Tom.\n>\n> I was trying to simulate some scenarios to be able to explain how the plan\n> would change with/without\n> *Rows Removed by Filter: 73969 * -- by using a different/correct index.\n>\n> postgres=# \\d t\n> Table \"public.t\"\n> Column | Type | Collation | Nullable | Default\n> ------------+-----------------------------+-----------+----------+---------\n> id | integer | | not null |\n> created_on | timestamp without time zone | | |\n> col1 | text | | |\n> Indexes:\n> \"t_pkey\" PRIMARY KEY, btree (id)\n> \"t_created_on_idx\" btree (created_on) WHERE col1 = 'a'::text ---\n> useless index as all rows have col1 = 'a', but to attempt lossy case\n> \"t_created_on_idx1\" btree (created_on)\n> Referenced by:\n> TABLE \"t1\" CONSTRAINT \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n>\n> postgres=# \\d t1\n> Table \"public.t1\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> t1_id | integer | | not null |\n> id | integer | | |\n> col2 | text | | |\n> Indexes:\n> \"t1_pkey\" PRIMARY KEY, btree (t1_id)\n> Foreign-key constraints:\n> \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n>\n>\n>\n> postgres=# update t set col1 = 'a';\n> UPDATE 1000\n>\n> postgres=# explain analyze 
select 1 from t join t1 on (t.id = t1.id)\n> where created_on = '2021-06-01 12:48:45.141123';\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------\n> Hash Join (cost=37.01..39.28 rows=1 width=4) (actual time=0.124..0.125\n> rows=0 loops=1)\n> Hash Cond: (t1.id = t.id)\n> -> Seq Scan on t1 (cost=0.00..2.00 rows=100 width=4) (actual\n> time=0.004..0.008 rows=100 loops=1)\n> -> Hash (cost=37.00..37.00 rows=1 width=4) (actual time=0.109..0.109\n> rows=1 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> Seq Scan on t (cost=0.00..37.00 rows=1 width=4) (actual\n> time=0.058..0.107 rows=1 loops=1)\n> Filter: (created_on = '2021-06-01\n> 12:48:45.141123'::timestamp without time zone)\n> *Rows Removed by Filter: 999 --- as no useful\n> index, t_created_on_idx will fetch all pages and then remove rows from\n> them, expensive*\n> Planning Time: 0.111 ms\n> Execution Time: 0.162 ms\n> (10 rows)\n>\n>\n> postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id)\n> where created_on = '2021-06-01 12:48:45.141123';\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=8.32..33.47 rows=1 width=4) (actual time=0.025..0.026\n> rows=0 loops=1)\n> Hash Cond: (t1.id = t.id)\n> -> Seq Scan on t1 (cost=0.00..22.00 rows=1200 width=4) (actual\n> time=0.009..0.009 rows=1 loops=1)\n> -> Hash (cost=8.31..8.31 rows=1 width=4) (actual time=0.014..0.014\n> rows=0 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n> -> Index Scan using t_created_on_idx1 on t (cost=0.29..8.31\n> rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)\n> Index Cond: (created_on = '2021-06-01\n> 12:48:45.141123'::timestamp without time zone) -- *exact match using\n> btree index, *\n> Planning Time: 0.255 ms\n> Execution Time: 0.071 ms\n> (9 rows)\n>\n>\n> but from Ayub's plan, the number of rows fetched 
are a lot, but is also\n> removing rows post index scan.\n> if that can be improved with a btree index that does not filter unwanted\n> rows, the run may be faster ?\n> but i guess if there are 156k rows, planner would a have found a win in\n> seq scan.\n>\n> Ayub,\n> just for the sake of understanding,\n>\n> can you run the query using\n>\n> postgres=# set enable_seqscan TO 0;\n> SET\n> postgres=# -- explain analyze <run the query>\n>\n> postgres=# set enable_seqscan TO 1;\n> SET\n>\n>\n> On Sun, 6 Jun 2021 at 00:59, Tom Lane <[email protected]> wrote:\n>\n>> Ayub Khan <[email protected]> writes:\n>> > could someone clarify why the LEFT JOIN order_offer_map oom using\n>> > (order_id) in the below query is using sequential scan instead of\n>> > using index on order_id which is defined in order_offer_map table.\n>>\n>> Probably because it estimates the hash join to restaurant_order is\n>> faster than a nestloop join would be. I think it's likely right.\n>> You'd need very optimistic assumptions about the cost of an\n>> individual index probe into order_offer_map to conclude that 156K\n>> of them would be faster than the 476ms that are being spent here\n>> to read order_offer_map and join it to the result of the\n>> indexscan on restaurant_order.\n>>\n>> If, indeed, that *is* faster on your hardware, you might want\n>> to dial down random_page_cost to get more-relevant estimates.\n>>\n>> regards, tom lane\n>>\n>>\n>>\n>\n> --\n> Thanks,\n> Vijay\n> Mumbai, India\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. 
However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Sun, 6 Jun 2021 00:14:18 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query planner not using index, instead using squential scan"
},
{
"msg_contents": "Yes, slowdown was expected :)\n\nI was just interested in cost estimates.\nAlso did you try to set random_page_cost to 1 if your storage is not hdd.\n\n\n\nOn Sun, 6 Jun 2021 at 2:44 AM Ayub Khan <[email protected]> wrote:\n\n>\n> by setting enable_sequence_scan=OFF, the query execution time seems to\n> have slowed down even though the index is being used for left join of\n> order_offer_map\n>\n> Hash Left Join (cost=72639.74..8176118.25 rows=19276467 width=293)\n> (actual time=858.853..3166.994 rows=230222 loops=1)\n> Hash Cond: (ro.order_id = oom.order_id)\n> -> Hash Join (cost=1947.33..2053190.95 rows=19276467 width=211)\n> (actual time=20.550..462.303 rows=230222 loops=1)\n> Hash Cond: (b.city_id = c.city_id)\n> -> Hash Join (cost=1937.65..1998751.06 rows=19276467 width=190)\n> (actual time=20.523..399.979 rows=230222 loops=1)\n> Hash Cond: (b.restaurant_id = r.restaurant_id)\n> -> Hash Join (cost=1596.61..1947784.40 rows=19276467\n> width=190) (actual time=19.047..339.984 rows=230222 loops=1)\n> Hash Cond: (ro.branch_id = b.branch_id)\n> -> Nested Loop (cost=0.56..1895577.38 rows=19276467\n> width=108) (actual time=0.032..240.278 rows=230222 loops=1)\n> -> Function Scan on start_date\n> (cost=0.00..0.01 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1)\n> -> Index Scan using \"idx$$_00010001\" on\n> restaurant_order ro (cost=0.56..1702812.70 rows=19276467 width=108)\n> (actual time=0.025..117.525 rows=230222 loops=1)\n> Index Cond: ((date_time >=\n> start_date.start_date) AND (date_time <= '2021-06-05\n> 21:09:50.161463+00'::timestamp with time zone))\n> -> Hash (cost=1334.19..1334.19 rows=20949 width=90)\n> (actual time=18.969..18.969 rows=20949 loops=1)\n> Buckets: 32768 Batches: 1 Memory Usage: 2758kB\n> -> Index Scan using \"branch_idx$$_274b0038\" on\n> branch b (cost=0.29..1334.19 rows=20949 width=90) (actual\n> time=0.008..14.371 rows=20949 loops=1)\n> -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual\n> time=1.450..1.451 
rows=8609 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 465kB\n> -> Index Only Scan using \"restaurant_idx$$_274b003d\"\n> on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual\n> time=0.011..0.660 rows=8609 loops=1)\n> Heap Fetches: 0\n> -> Hash (cost=8.98..8.98 rows=56 width=29) (actual\n> time=0.021..0.021 rows=56 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 12kB\n> -> Index Only Scan using \"city_idx$$_274b0022\" on city c\n> (cost=0.14..8.98 rows=56 width=29) (actual time=0.004..0.010 rows=56\n> loops=1)\n> Heap Fetches: 0\n> -> Hash (cost=54779.81..54779.81 rows=1273009 width=15) (actual\n> time=836.132..836.133 rows=1273009 loops=1)\n> Buckets: 2097152 Batches: 1 Memory Usage: 81629kB\n> -> Index Scan Backward using order_offer_map_order_id on\n> order_offer_map oom (cost=0.43..54779.81 rows=1273009 width=15) (actual\n> time=0.010..578.226 rows=1273009 loops=1)\n> Planning Time: 1.229 ms\n> Execution Time: 3183.248 ms\n>\n> On Sat, Jun 5, 2021 at 10:52 PM Vijaykumar Jain <\n> [email protected]> wrote:\n>\n>> thanks Tom.\n>>\n>> I was trying to simulate some scenarios to be able to explain how the\n>> plan would change with/without\n>> *Rows Removed by Filter: 73969 * -- by using a different/correct index.\n>>\n>> postgres=# \\d t\n>> Table \"public.t\"\n>> Column | Type | Collation | Nullable | Default\n>>\n>> ------------+-----------------------------+-----------+----------+---------\n>> id | integer | | not null |\n>> created_on | timestamp without time zone | | |\n>> col1 | text | | |\n>> Indexes:\n>> \"t_pkey\" PRIMARY KEY, btree (id)\n>> \"t_created_on_idx\" btree (created_on) WHERE col1 = 'a'::text ---\n>> useless index as all rows have col1 = 'a', but to attempt lossy case\n>> \"t_created_on_idx1\" btree (created_on)\n>> Referenced by:\n>> TABLE \"t1\" CONSTRAINT \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n>>\n>> postgres=# \\d t1\n>> Table \"public.t1\"\n>> Column | Type | Collation | Nullable | Default\n>> 
--------+---------+-----------+----------+---------\n>> t1_id | integer | | not null |\n>> id | integer | | |\n>> col2 | text | | |\n>> Indexes:\n>> \"t1_pkey\" PRIMARY KEY, btree (t1_id)\n>> Foreign-key constraints:\n>> \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)\n>>\n>>\n>>\n>> postgres=# update t set col1 = 'a';\n>> UPDATE 1000\n>>\n>> postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id)\n>> where created_on = '2021-06-01 12:48:45.141123';\n>> QUERY PLAN\n>>\n>> --------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=37.01..39.28 rows=1 width=4) (actual time=0.124..0.125\n>> rows=0 loops=1)\n>> Hash Cond: (t1.id = t.id)\n>> -> Seq Scan on t1 (cost=0.00..2.00 rows=100 width=4) (actual\n>> time=0.004..0.008 rows=100 loops=1)\n>> -> Hash (cost=37.00..37.00 rows=1 width=4) (actual time=0.109..0.109\n>> rows=1 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>> -> Seq Scan on t (cost=0.00..37.00 rows=1 width=4) (actual\n>> time=0.058..0.107 rows=1 loops=1)\n>> Filter: (created_on = '2021-06-01\n>> 12:48:45.141123'::timestamp without time zone)\n>> *Rows Removed by Filter: 999 --- as no useful\n>> index, t_created_on_idx will fetch all pages and then remove rows from\n>> them, expensive*\n>> Planning Time: 0.111 ms\n>> Execution Time: 0.162 ms\n>> (10 rows)\n>>\n>>\n>> postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id)\n>> where created_on = '2021-06-01 12:48:45.141123';\n>> QUERY PLAN\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=8.32..33.47 rows=1 width=4) (actual time=0.025..0.026\n>> rows=0 loops=1)\n>> Hash Cond: (t1.id = t.id)\n>> -> Seq Scan on t1 (cost=0.00..22.00 rows=1200 width=4) (actual\n>> time=0.009..0.009 rows=1 loops=1)\n>> -> Hash (cost=8.31..8.31 rows=1 width=4) (actual time=0.014..0.014\n>> rows=0 loops=1)\n>> Buckets: 1024 
Batches: 1 Memory Usage: 8kB\n>> -> Index Scan using t_created_on_idx1 on t (cost=0.29..8.31\n>> rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)\n>> Index Cond: (created_on = '2021-06-01\n>> 12:48:45.141123'::timestamp without time zone) -- *exact match using\n>> btree index, *\n>> Planning Time: 0.255 ms\n>> Execution Time: 0.071 ms\n>> (9 rows)\n>>\n>>\n>> but from Ayub's plan, the number of rows fetched are a lot, but is also\n>> removing rows post index scan.\n>> if that can be improved with a btree index that does not filter unwanted\n>> rows, the run may be faster ?\n>> but i guess if there are 156k rows, planner would a have found a win in\n>> seq scan.\n>>\n>> Ayub,\n>> just for the sake of understanding,\n>>\n>> can you run the query using\n>>\n>> postgres=# set enable_seqscan TO 0;\n>> SET\n>> postgres=# -- explain analyze <run the query>\n>>\n>> postgres=# set enable_seqscan TO 1;\n>> SET\n>>\n>>\n>> On Sun, 6 Jun 2021 at 00:59, Tom Lane <[email protected]> wrote:\n>>\n>>> Ayub Khan <[email protected]> writes:\n>>> > could someone clarify why the LEFT JOIN order_offer_map oom using\n>>> > (order_id) in the below query is using sequential scan instead of\n>>> > using index on order_id which is defined in order_offer_map table.\n>>>\n>>> Probably because it estimates the hash join to restaurant_order is\n>>> faster than a nestloop join would be. 
I think it's likely right.\n>>> You'd need very optimistic assumptions about the cost of an\n>>> individual index probe into order_offer_map to conclude that 156K\n>>> of them would be faster than the 476ms that are being spent here\n>>> to read order_offer_map and join it to the result of the\n>>> indexscan on restaurant_order.\n>>>\n>>> If, indeed, that *is* faster on your hardware, you might want\n>>> to dial down random_page_cost to get more-relevant estimates.\n>>>\n>>> regards, tom lane\n>>>\n>>>\n>>>\n>>\n>> --\n>> Thanks,\n>> Vijay\n>> Mumbai, India\n>>\n>\n>\n> --\n> --------------------------------------------------------------------\n> Sun Certified Enterprise Architect 1.5\n> Sun Certified Java Programmer 1.4\n> Microsoft Certified Systems Engineer 2000\n> http://in.linkedin.com/pub/ayub-khan/a/811/b81\n> mobile:+966-502674604\n> ----------------------------------------------------------------------\n> It is proved that Hard Work and kowledge will get you close but attitude\n> will get you there. 
However, it's the Love\n> of God that will put you over the top!!\n>\n-- \nThanks,\nVijay\nMumbai, India\n\nYes, slowdown was expected :)I was just interested in cost estimates.Also did you try to set random_page_cost to 1 if your storage is not hdd.On Sun, 6 Jun 2021 at 2:44 AM Ayub Khan <[email protected]> wrote:by setting enable_sequence_scan=OFF, the query execution time seems to have slowed down even though the index is being used for left join of order_offer_map\n\nHash Left Join (cost=72639.74..8176118.25 rows=19276467 width=293) (actual time=858.853..3166.994 rows=230222 loops=1) Hash Cond: (ro.order_id = oom.order_id) -> Hash Join (cost=1947.33..2053190.95 rows=19276467 width=211) (actual time=20.550..462.303 rows=230222 loops=1) Hash Cond: (b.city_id = c.city_id) -> Hash Join (cost=1937.65..1998751.06 rows=19276467 width=190) (actual time=20.523..399.979 rows=230222 loops=1) Hash Cond: (b.restaurant_id = r.restaurant_id) -> Hash Join (cost=1596.61..1947784.40 rows=19276467 width=190) (actual time=19.047..339.984 rows=230222 loops=1) Hash Cond: (ro.branch_id = b.branch_id) -> Nested Loop (cost=0.56..1895577.38 rows=19276467 width=108) (actual time=0.032..240.278 rows=230222 loops=1) -> Function Scan on start_date (cost=0.00..0.01 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=1) -> Index Scan using \"idx$$_00010001\" on restaurant_order ro (cost=0.56..1702812.70 rows=19276467 width=108) (actual time=0.025..117.525 rows=230222 loops=1) Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-05 21:09:50.161463+00'::timestamp with time zone)) -> Hash (cost=1334.19..1334.19 rows=20949 width=90) (actual time=18.969..18.969 rows=20949 loops=1) Buckets: 32768 Batches: 1 Memory Usage: 2758kB -> Index Scan using \"branch_idx$$_274b0038\" on branch b (cost=0.29..1334.19 rows=20949 width=90) (actual time=0.008..14.371 rows=20949 loops=1) -> Hash (cost=233.42..233.42 rows=8609 width=8) (actual time=1.450..1.451 rows=8609 loops=1) Buckets: 
16384 Batches: 1 Memory Usage: 465kB -> Index Only Scan using \"restaurant_idx$$_274b003d\" on restaurant r (cost=0.29..233.42 rows=8609 width=8) (actual time=0.011..0.660 rows=8609 loops=1) Heap Fetches: 0 -> Hash (cost=8.98..8.98 rows=56 width=29) (actual time=0.021..0.021 rows=56 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 12kB -> Index Only Scan using \"city_idx$$_274b0022\" on city c (cost=0.14..8.98 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1) Heap Fetches: 0 -> Hash (cost=54779.81..54779.81 rows=1273009 width=15) (actual time=836.132..836.133 rows=1273009 loops=1) Buckets: 2097152 Batches: 1 Memory Usage: 81629kB -> Index Scan Backward using order_offer_map_order_id on order_offer_map oom (cost=0.43..54779.81 rows=1273009 width=15) (actual time=0.010..578.226 rows=1273009 loops=1)Planning Time: 1.229 msExecution Time: 3183.248 msOn Sat, Jun 5, 2021 at 10:52 PM Vijaykumar Jain <[email protected]> wrote:thanks Tom.I was trying to simulate some scenarios to be able to explain how the plan would change with/without Rows Removed by Filter: 73969 -- by using a different/correct index.postgres=# \\d t Table \"public.t\" Column | Type | Collation | Nullable | Default------------+-----------------------------+-----------+----------+--------- id | integer | | not null | created_on | timestamp without time zone | | | col1 | text | | |Indexes: \"t_pkey\" PRIMARY KEY, btree (id) \"t_created_on_idx\" btree (created_on) WHERE col1 = 'a'::text --- useless index as all rows have col1 = 'a', but to attempt lossy case \"t_created_on_idx1\" btree (created_on)Referenced by: TABLE \"t1\" CONSTRAINT \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES t(id)postgres=# \\d t1 Table \"public.t1\" Column | Type | Collation | Nullable | Default--------+---------+-----------+----------+--------- t1_id | integer | | not null | id | integer | | | col2 | text | | |Indexes: \"t1_pkey\" PRIMARY KEY, btree (t1_id)Foreign-key constraints: \"t1_id_fkey\" FOREIGN KEY (id) REFERENCES 
t(id)postgres=# update t set col1 = 'a';UPDATE 1000postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id) where created_on = '2021-06-01 12:48:45.141123'; QUERY PLAN-------------------------------------------------------------------------------------------------------- Hash Join (cost=37.01..39.28 rows=1 width=4) (actual time=0.124..0.125 rows=0 loops=1) Hash Cond: (t1.id = t.id) -> Seq Scan on t1 (cost=0.00..2.00 rows=100 width=4) (actual time=0.004..0.008 rows=100 loops=1) -> Hash (cost=37.00..37.00 rows=1 width=4) (actual time=0.109..0.109 rows=1 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 9kB -> Seq Scan on t (cost=0.00..37.00 rows=1 width=4) (actual time=0.058..0.107 rows=1 loops=1) Filter: (created_on = '2021-06-01 12:48:45.141123'::timestamp without time zone) Rows Removed by Filter: 999 --- as no useful index, t_created_on_idx will fetch all pages and then remove rows from them, expensive Planning Time: 0.111 ms Execution Time: 0.162 ms(10 rows)postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id) where created_on = '2021-06-01 12:48:45.141123'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=8.32..33.47 rows=1 width=4) (actual time=0.025..0.026 rows=0 loops=1) Hash Cond: (t1.id = t.id) -> Seq Scan on t1 (cost=0.00..22.00 rows=1200 width=4) (actual time=0.009..0.009 rows=1 loops=1) -> Hash (cost=8.31..8.31 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 8kB -> Index Scan using t_created_on_idx1 on t (cost=0.29..8.31 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1) Index Cond: (created_on = '2021-06-01 12:48:45.141123'::timestamp without time zone) -- exact match using btree index, Planning Time: 0.255 ms Execution Time: 0.071 ms(9 rows)but from Ayub's plan, the number of rows fetched are a lot, but is also removing rows post index scan.if that can be improved 
with a btree index that does not filter unwanted rows, the run may be faster ?but i guess if there are 156k rows, planner would have found a win in seq scan.Ayub,just for the sake of understanding,can you run the query usingpostgres=# set enable_seqscan TO 0;SETpostgres=# -- explain analyze <run the query>postgres=# set enable_seqscan TO 1;SETOn Sun, 6 Jun 2021 at 00:59, Tom Lane <[email protected]> wrote:Ayub Khan <[email protected]> writes:\n> could someone clarify why the LEFT JOIN order_offer_map oom using\n> (order_id) in the below query is using sequential scan instead of\n> using index on order_id which is defined in order_offer_map table.\n\nProbably because it estimates the hash join to restaurant_order is\nfaster than a nestloop join would be. I think it's likely right.\nYou'd need very optimistic assumptions about the cost of an\nindividual index probe into order_offer_map to conclude that 156K\nof them would be faster than the 476ms that are being spent here\nto read order_offer_map and join it to the result of the\nindexscan on restaurant_order.\n\nIf, indeed, that *is* faster on your hardware, you might want\nto dial down random_page_cost to get more-relevant estimates.\n\n regards, tom lane\n\n\n-- Thanks,VijayMumbai, India\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and knowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!\n-- Thanks,VijayMumbai, India",
"msg_date": "Sun, 6 Jun 2021 02:48:10 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner not using index, instead using squential scan"
}
] |
[
{
"msg_contents": "Other than Dexter, Is there an auto tune or query performance indicator for\npostgres ?\nAlso which are the most commonly used monitoring (slow query, cpu, index\ncreation for missing indexs ) tools being used for postgres ?\n\n--Ayub\n\nOther than Dexter, Is there an auto tune or query performance indicator for postgres ?Also which are the most commonly used monitoring (slow query, cpu, index creation for missing indexs ) tools being used for postgres ?--Ayub",
"msg_date": "Mon, 7 Jun 2021 07:51:49 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "dexter on AWS RDS auto tune queries"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jun 7, 2021 at 12:52 PM Ayub Khan <[email protected]> wrote:\n>\n> Other than Dexter, Is there an auto tune or query performance indicator for postgres ?\n\nIt depends. If you're on AWS or any other cloud, probably nothing\napart from tools based on logs or standard SQL execution (so nothing\nbased on third party extension).\n\n> Also which are the most commonly used monitoring (slow query, cpu, index creation for missing indexs ) tools being used for postgres ?\n\nThere are a lot of projects documented at\nhttps://wiki.postgresql.org/wiki/Monitoring\n\n\n",
"msg_date": "Mon, 7 Jun 2021 12:57:37 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dexter on AWS RDS auto tune queries"
},
{
"msg_contents": "\n\n> On Jun 6, 2021, at 21:51, Ayub Khan <[email protected]> wrote:\n> Other than Dexter, Is there an auto tune or query performance indicator for postgres ?\n\nGenerally, auto-creating indexes isn't a great idea. I respect the work that went into Dexter, but it's much better to find the queries and study them, then decide if index creation is the right thing.\n\nRDS has Performance Insights, which is a very useful tool for finding where the load on your server is actually coming from.\n\n",
"msg_date": "Sun, 6 Jun 2021 22:00:46 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dexter on AWS RDS auto tune queries"
},
{
"msg_contents": "Thank you @Julian\n\n@Christophe: yes I am using RDS performance insights, however it might\nbe more helpful if it could give more info about the slowness of the\nqueries and what improvements could be done to the queries itself.\n\nI am using pgMusted to analyze a slow query and there the suggestion is to\ncreate an index on app2.user_id, however app2.user_id is a primary key.\n\nbelow is the query and its explain:\n\nselect * from (\n SELECT\n act.*,\n app1.user_name AS created_by_username,\n app2.user_name AS last_updated_by_username\n FROM\n account_transactions AS act LEFT OUTER JOIN app_user AS app1 ON\napp1.user_id = act.created_by\n LEFT OUTER JOIN app_user AS app2 ON app2.user_id = act.last_updated_by\n WHERE act.is_deleted = 'false' AND\n act.CREATION_DATE BETWEEN TO_DATE('06/06/2021', 'DD-MM-YYYY')\nAND TO_DATE('07-06-2021', 'DD-MM-YYYY')\n ORDER BY act.ID DESC\n) as items order by id desc\n\n\nSort (cost=488871.14..489914.69 rows=417420 width=270) (actual\ntime=2965.815..2979.921 rows=118040 loops=1)\n Sort Key: act.id DESC\n Sort Method: quicksort Memory: 57607kB\n -> Merge Left Join (cost=422961.21..449902.61 rows=417420 width=270)\n(actual time=2120.021..2884.484 rows=118040 loops=1)\n Merge Cond: (act.last_updated_by = ((app2.user_id)::numeric))\n -> Sort (cost=7293.98..7301.62 rows=3054 width=257) (actual\ntime=464.243..481.292 rows=118040 loops=1)\n Sort Key: act.last_updated_by\n Sort Method: quicksort Memory: 50899kB\n -> Nested Loop Left Join (cost=0.87..7117.21 rows=3054\nwidth=257) (actual time=0.307..316.148 rows=118040 loops=1)\n -> Index Scan using creation_date on\naccount_transactions act (cost=0.44..192.55 rows=3054 width=244) (actual\ntime=0.295..67.330 rows=118040 loops=1)\n\" Index Cond: ((creation_date >=\nto_date('06/06/2021'::text, 'DD-MM-YYYY'::text)) AND (creation_date <=\nto_date('07-06-2021'::text, 'DD-MM-YYYY'::text)))\"\n Filter: ((is_deleted)::text = 'false'::text)\n -> Index Scan using app_user_pk on 
app_user app1\n (cost=0.43..2.27 rows=1 width=21) (actual time=0.002..0.002 rows=1\nloops=118040)\n Index Cond: (user_id = act.created_by)\n -> Sort (cost=415667.22..423248.65 rows=3032573 width=21) (actual\ntime=1655.748..1876.596 rows=3079326 loops=1)\n Sort Key: ((app2.user_id)::numeric)\n Sort Method: quicksort Memory: 335248kB\n -> Seq Scan on app_user app2 (cost=0.00..89178.73\nrows=3032573 width=21) (actual time=0.013..575.630 rows=3032702 loops=1)\nPlanning Time: 2.222 ms\nExecution Time: 3009.387 ms\n\n\nOn Mon, Jun 7, 2021 at 8:00 AM Christophe Pettus <[email protected]> wrote:\n\n>\n>\n> > On Jun 6, 2021, at 21:51, Ayub Khan <[email protected]> wrote:\n> > Other than Dexter, Is there an auto tune or query performance indicator\n> for postgres ?\n>\n> Generally, auto-creating indexes isn't a great idea. I respect the work\n> that went into Dexter, but it's much better to find the queries and study\n> them, then decide if index creation is the right thing.\n>\n> RDS has Performance Insights, which is a very useful tool for finding\n> where the load on your server is actually coming from.\n\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. 
However, it's the Love\nof God that will put you over the top!!\n\nThank you @Julian@Christophe: yes I am using RDS performance insights, however it might be more helpful if it could give more info about the slowness of the queries and what improvements could be done to the queries itself.I am using pgMusted to analyze a slow query and there the suggestion is to create an index on \n\napp2.user_id, however app2.user_id is a primary key.below is the query and its explain:select * from ( SELECT act.*, app1.user_name AS created_by_username, app2.user_name AS last_updated_by_username FROM account_transactions AS act LEFT OUTER JOIN app_user AS app1 ON app1.user_id = act.created_by LEFT OUTER JOIN app_user AS app2 ON app2.user_id = act.last_updated_by WHERE act.is_deleted = 'false' AND act.CREATION_DATE BETWEEN TO_DATE('06/06/2021', 'DD-MM-YYYY') AND TO_DATE('07-06-2021', 'DD-MM-YYYY') ORDER BY act.ID DESC) as items order by id descSort (cost=488871.14..489914.69 rows=417420 width=270) (actual time=2965.815..2979.921 rows=118040 loops=1) Sort Key: act.id DESC Sort Method: quicksort Memory: 57607kB -> Merge Left Join (cost=422961.21..449902.61 rows=417420 width=270) (actual time=2120.021..2884.484 rows=118040 loops=1) Merge Cond: (act.last_updated_by = ((app2.user_id)::numeric)) -> Sort (cost=7293.98..7301.62 rows=3054 width=257) (actual time=464.243..481.292 rows=118040 loops=1) Sort Key: act.last_updated_by Sort Method: quicksort Memory: 50899kB -> Nested Loop Left Join (cost=0.87..7117.21 rows=3054 width=257) (actual time=0.307..316.148 rows=118040 loops=1) -> Index Scan using creation_date on account_transactions act (cost=0.44..192.55 rows=3054 width=244) (actual time=0.295..67.330 rows=118040 loops=1)\" Index Cond: ((creation_date >= to_date('06/06/2021'::text, 'DD-MM-YYYY'::text)) AND (creation_date <= to_date('07-06-2021'::text, 'DD-MM-YYYY'::text)))\" Filter: ((is_deleted)::text = 'false'::text) -> Index Scan using app_user_pk on app_user app1 (cost=0.43..2.27 
rows=1 width=21) (actual time=0.002..0.002 rows=1 loops=118040) Index Cond: (user_id = act.created_by) -> Sort (cost=415667.22..423248.65 rows=3032573 width=21) (actual time=1655.748..1876.596 rows=3079326 loops=1) Sort Key: ((app2.user_id)::numeric) Sort Method: quicksort Memory: 335248kB -> Seq Scan on app_user app2 (cost=0.00..89178.73 rows=3032573 width=21) (actual time=0.013..575.630 rows=3032702 loops=1)Planning Time: 2.222 msExecution Time: 3009.387 msOn Mon, Jun 7, 2021 at 8:00 AM Christophe Pettus <[email protected]> wrote:\n\n> On Jun 6, 2021, at 21:51, Ayub Khan <[email protected]> wrote:\n> Other than Dexter, Is there an auto tune or query performance indicator for postgres ?\n\nGenerally, auto-creating indexes isn't a great idea. I respect the work that went into Dexter, but it's much better to find the queries and study them, then decide if index creation is the right thing.\n\nRDS has Performance Insights, which is a very useful tool for finding where the load on your server is actually coming from.-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and kowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Mon, 7 Jun 2021 10:49:52 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dexter on AWS RDS auto tune queries"
},
{
"msg_contents": "Please don't top post here.\n\nOn Mon, Jun 7, 2021 at 3:50 PM Ayub Khan <[email protected]> wrote:\n>\n> @Christophe: yes I am using RDS performance insights, however it might be more helpful if it could give more info about the slowness of the queries and what improvements could be done to the queries itself.\n>\n> I am using pgMusted to analyze a slow query and there the suggestion is to create an index on app2.user_id, however app2.user_id is a primary key.\n>\n> below is the query and its explain:\n>\n> select * from (\n> SELECT\n> act.*,\n> app1.user_name AS created_by_username,\n> app2.user_name AS last_updated_by_username\n> FROM\n> account_transactions AS act LEFT OUTER JOIN app_user AS app1 ON app1.user_id = act.created_by\n> LEFT OUTER JOIN app_user AS app2 ON app2.user_id = act.last_updated_by\n> WHERE act.is_deleted = 'false' AND\n> act.CREATION_DATE BETWEEN TO_DATE('06/06/2021', 'DD-MM-YYYY') AND TO_DATE('07-06-2021', 'DD-MM-YYYY')\n> ORDER BY act.ID DESC\n> ) as items order by id desc\n>\n>\n> Sort (cost=488871.14..489914.69 rows=417420 width=270) (actual time=2965.815..2979.921 rows=118040 loops=1)\n> Sort Key: act.id DESC\n> Sort Method: quicksort Memory: 57607kB\n> -> Merge Left Join (cost=422961.21..449902.61 rows=417420 width=270) (actual time=2120.021..2884.484 rows=118040 loops=1)\n> Merge Cond: (act.last_updated_by = ((app2.user_id)::numeric))\n> -> Sort (cost=7293.98..7301.62 rows=3054 width=257) (actual time=464.243..481.292 rows=118040 loops=1)\n> Sort Key: act.last_updated_by\n> Sort Method: quicksort Memory: 50899kB\n> -> Nested Loop Left Join (cost=0.87..7117.21 rows=3054 width=257) (actual time=0.307..316.148 rows=118040 loops=1)\n> -> Index Scan using creation_date on account_transactions act (cost=0.44..192.55 rows=3054 width=244) (actual time=0.295..67.330 rows=118040 loops=1)\n> \" Index Cond: ((creation_date >= to_date('06/06/2021'::text, 'DD-MM-YYYY'::text)) AND (creation_date <= to_date('07-06-2021'::text, 
'DD-MM-YYYY'::text)))\"\n> Filter: ((is_deleted)::text = 'false'::text)\n> -> Index Scan using app_user_pk on app_user app1 (cost=0.43..2.27 rows=1 width=21) (actual time=0.002..0.002 rows=1 loops=118040)\n> Index Cond: (user_id = act.created_by)\n> -> Sort (cost=415667.22..423248.65 rows=3032573 width=21) (actual time=1655.748..1876.596 rows=3079326 loops=1)\n> Sort Key: ((app2.user_id)::numeric)\n> Sort Method: quicksort Memory: 335248kB\n> -> Seq Scan on app_user app2 (cost=0.00..89178.73 rows=3032573 width=21) (actual time=0.013..575.630 rows=3032702 loops=1)\n> Planning Time: 2.222 ms\n> Execution Time: 3009.387 ms\n\nI'd say that your problem is that account_transactions.updated_by is\nnumeric (which seems like a terrible idea) while app_user.user_id is\nnot, so the index can't be used. Some extensions could detect that,\nbut you won't be able to install them on RDS.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 15:57:33 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dexter on AWS RDS auto tune queries"
},
{
"msg_contents": "Julien,\n\nThank you for the pointer. I will change the data type and verify the query\nagain.\n\n-Ayub\n\nOn Mon, Jun 7, 2021 at 7:51 AM Ayub Khan <[email protected]> wrote:\n\n>\n> Other than Dexter, Is there an auto tune or query performance indicator\n> for postgres ?\n> Also which are the most commonly used monitoring (slow query, cpu, index\n> creation for missing indexs ) tools being used for postgres ?\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!\n\nJulien,Thank you for the pointer. I will change the data type and verify the query again.-AyubOn Mon, Jun 7, 2021 at 7:51 AM Ayub Khan <[email protected]> wrote:Other than Dexter, Is there an auto tune or query performance indicator for postgres ?Also which are the most commonly used monitoring (slow query, cpu, index creation for missing indexs ) tools being used for postgres ?--Ayub\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and kowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Mon, 7 Jun 2021 13:05:55 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dexter on AWS RDS auto tune queries"
}
] |
[
{
"msg_contents": "I checked all the indexes are defined on the tables however the query seems\nslow, below is the plan. Can any one give any pointers to verify ?\n\nSELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,\nb.menu_item_category_desc, c.menu_item_variant_id,\nc.menu_item_variant_type_id, c.price, c.size_id,\nc.parent_menu_item_variant_id, d.menu_item_variant_type_desc,\ne.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n\n FROM menu_item_category AS b, menu_item_variant AS c,\nmenu_item_variant_type AS d, item_size AS e, restaurant AS f,\nmenu_item AS a\n\n LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE\na.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id =\nc.menu_item_id AND c.menu_item_variant_type_id =\nd.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id =\ne.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id =\n1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n\n AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM\nmenu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted =\n'N' limit 1) AND a.active = 'Y'\n AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE\nCONCAT_WS('', '%,4191,%') OR NULL IS NULL)\n AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n\n ORDER BY a.row_order, menu_item_id;\n\n\nbelow is the plan\n\n\nSort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885\nrows=89 loops=1)\n\" Sort Key: a.row_order, a.menu_item_id\"\n Sort Method: quicksort Memory: 48kB\n -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152)\n(actual time=0.188..5.809 rows=89 loops=1)\n Join Filter: (a.mark_id = m.mark_id)\n Rows Removed by Join Filter: 267\n -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual\ntime=0.181..5.629 rows=89 loops=1)\n -> Nested Loop (cost=4.90..185.88 rows=1 width=152)\n(actual time=0.174..5.443 rows=89 loops=1)\n -> Nested Loop (cost=4.61..185.57 rows=1\nwidth=144) (actual 
time=0.168..5.272 rows=89 loops=1)\n -> Nested Loop (cost=4.32..185.25 rows=1\nwidth=136) (actual time=0.162..5.066 rows=89 loops=1)\n -> Nested Loop (cost=0.71..179.62\nrows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)\n -> Index Scan using\nmenu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1\nwidth=87) (actual time=0.130..3.769 rows=89 loops=1)\n Index Cond: (restaurant_id = 1528)\n\" Filter: ((active =\n'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) =\n'Y'::bpchar))\"\n Rows Removed by Filter: 194\n -> Index Scan using\nmenu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1\nwidth=20) (actual time=0.002..0.002 rows=1 loops=89)\n Index Cond:\n(menu_item_category_id = a.menu_item_category_id)\n -> Index Scan using\nmenu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1\nwidth=45) (actual time=0.002..0.002 rows=1 loops=89)\n Index Cond:\n(menu_item_variant_id = (SubPlan 1))\n Filter: (a.menu_item_id = menu_item_id)\n SubPlan 1\n -> Limit (cost=3.17..3.18\nrows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)\n -> Aggregate\n(cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1\nloops=89)\n -> Index Scan\nusing \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8\nwidth=8) (actual time=0.004..0.007 rows=7 loops=89)\n Index Cond:\n(menu_item_id = a.menu_item_id)\n Filter:\n(deleted = 'N'::bpchar)\n Rows Removed\nby Filter: 4\n -> Index Scan using\nmenu_item_variant_type_pk on menu_item_variant_type d\n(cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1\nloops=89)\n Index Cond: (menu_item_variant_type_id\n= c.menu_item_variant_type_id)\n Filter: ((is_hidden)::text = 'false'::text)\n -> Index Scan using size_pk on item_size e\n(cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1\nloops=89)\n Index Cond: (size_id = c.size_id)\n -> Index Scan using \"restaurant_idx$$_274b003d\" on\nrestaurant f (cost=0.29..2.30 rows=1 width=12) 
(actual\ntime=0.001..0.002 rows=1 loops=89)\n Index Cond: (restaurant_id = 1528)\n -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12)\n(actual time=0.000..0.001 rows=3 loops=89)\nPlanning Time: 1.510 ms\nExecution Time: 5.972 ms\n\nI checked all the indexes are defined on the tables however the query seems slow, below is the plan. Can any one give any pointers to verify ?SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;below is the planSort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)\" Sort Key: a.row_order, a.menu_item_id\" Sort Method: quicksort Memory: 48kB -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1) Join Filter: (a.mark_id = m.mark_id) Rows Removed by Join Filter: 267 -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1) -> Nested 
Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1) -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1) -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1) -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1) -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1) Index Cond: (restaurant_id = 1528)\" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\" Rows Removed by Filter: 194 -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_category_id = a.menu_item_category_id) -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_id = (SubPlan 1)) Filter: (a.menu_item_id = menu_item_id) SubPlan 1 -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89) -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89) Index Cond: (menu_item_id = a.menu_item_id) Filter: (deleted = 'N'::bpchar) Rows Removed by Filter: 4 -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id) Filter: ((is_hidden)::text = 'false'::text) -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89) Index Cond: (size_id = c.size_id) -> Index Scan using 
\"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89) Index Cond: (restaurant_id = 1528) -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)Planning Time: 1.510 msExecution Time: 5.972 ms",
"msg_date": "Tue, 8 Jun 2021 19:03:48 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query"
},
{
"msg_contents": "\n\n> On Jun 8, 2021, at 09:03, Ayub Khan <[email protected]> wrote:\n> I checked all the indexes are defined on the tables however the query seems slow, below is the plan.\n\nIt's currently running in slightly under six milliseconds. That seems reasonably fast given the number of operations required to fulfill it.\n\n",
"msg_date": "Tue, 8 Jun 2021 09:08:15 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "In AWS RDS performance insights the client writes is high and the api\nwhich receives data on the mobile side is slow during load test.\n\nOn Tue, 8 Jun 2021, 19:03 Ayub Khan, <[email protected]> wrote:\n\n>\n> I checked all the indexes are defined on the tables however the query\n> seems slow, below is the plan. Can any one give any pointers to verify ?\n>\n> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n>\n> FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a\n>\n> LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)\n> AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n>\n> below is the plan\n>\n>\n> Sort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)\n> \" Sort Key: a.row_order, a.menu_item_id\"\n> Sort Method: quicksort Memory: 48kB\n> -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)\n> Join Filter: (a.mark_id = m.mark_id)\n> Rows Removed by Join Filter: 267\n> -> Nested Loop (cost=5.19..188.19 
rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)\n> -> Nested Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)\n> -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)\n> -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)\n> -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)\n> -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)\n> Index Cond: (restaurant_id = 1528)\n> \" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\"\n> Rows Removed by Filter: 194\n> -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_category_id = a.menu_item_category_id)\n> -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_id = (SubPlan 1))\n> Filter: (a.menu_item_id = menu_item_id)\n> SubPlan 1\n> -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)\n> -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)\n> -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)\n> Index Cond: (menu_item_id = a.menu_item_id)\n> Filter: (deleted = 'N'::bpchar)\n> Rows Removed by Filter: 4\n> -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)\n> Filter: ((is_hidden)::text = 'false'::text)\n> -> Index Scan using size_pk on item_size e 
(cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)\n> Index Cond: (size_id = c.size_id)\n> -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89)\n> Index Cond: (restaurant_id = 1528)\n> -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)\n> Planning Time: 1.510 ms\n> Execution Time: 5.972 ms\n>\n>\n\nIn AWS RDS performance insights the client writes is high and the api which receives data on the mobile side is slow during load test.On Tue, 8 Jun 2021, 19:03 Ayub Khan, <[email protected]> wrote:I checked all the indexes are defined on the tables however the query seems slow, below is the plan. Can any one give any pointers to verify ?SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;below is the planSort 
(cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)\" Sort Key: a.row_order, a.menu_item_id\" Sort Method: quicksort Memory: 48kB -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1) Join Filter: (a.mark_id = m.mark_id) Rows Removed by Join Filter: 267 -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1) -> Nested Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1) -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1) -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1) -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1) -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1) Index Cond: (restaurant_id = 1528)\" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\" Rows Removed by Filter: 194 -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_category_id = a.menu_item_category_id) -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_id = (SubPlan 1)) Filter: (a.menu_item_id = menu_item_id) SubPlan 1 -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89) -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89) Index Cond: (menu_item_id = a.menu_item_id) Filter: (deleted = 'N'::bpchar) Rows Removed by Filter: 4 -> Index Scan using 
menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id) Filter: ((is_hidden)::text = 'false'::text) -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89) Index Cond: (size_id = c.size_id) -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89) Index Cond: (restaurant_id = 1528) -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)Planning Time: 1.510 msExecution Time: 5.972 ms",
"msg_date": "Tue, 8 Jun 2021 19:32:12 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "Ayub Khan <[email protected]> writes:\n> I checked all the indexes are defined on the tables however the query seems\n> slow, below is the plan. Can any one give any pointers to verify ?\n\nYou might try to do something about the poor selectivity estimate here:\n\n> -> Index Scan using\n> menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1\n> width=87) (actual time=0.130..3.769 rows=89 loops=1)\n> Index Cond: (restaurant_id = 1528)\n> \" Filter: ((active =\n> 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) =\n> 'Y'::bpchar))\"\n> Rows Removed by Filter: 194\n\nIf the planner realized that this'd produce O(100) rows not 1,\nit'd likely have picked a different plan. I'm guessing that\nthe issue is lack of knowledge about what is_menu_item_available()\nwill do. Maybe you could replace that with a status column?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Jun 2021 13:14:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "below is function definition of is_menu_item_available, for each item\nbased on current day time it returns when it's available or not. The same\napi works fine on oracle, I am seeing this slowness after migrating the\nqueries to postgresql RDS on AWS\n\n\nCREATE OR REPLACE FUNCTION is_menu_item_available(\n i_menu_item_id bigint,\n i_check_availability character)\n RETURNS character\n LANGUAGE 'plpgsql'\n COST 100\n VOLATILE PARALLEL UNSAFE\n AS $BODY$\n DECLARE\n l_current_day NUMERIC(1);\n o_time CHARACTER VARYING(10);\n l_current_interval INTERVAL DAY TO SECOND(2);\n item_available_count NUMERIC(10);\n BEGIN\n item_available_count := 0;\n\n BEGIN\n IF i_check_availability = 'Y' THEN\n BEGIN\n SELECT\n CASE TO_CHAR(now(), 'fmday')\n WHEN 'monday' THEN 1\n WHEN 'tuesday' THEN 2\n WHEN 'wednesday' THEN 3\n WHEN 'thursday' THEN 4\n WHEN 'friday' THEN 5\n WHEN 'saturday' THEN 6\n WHEN 'sunday' THEN 7\n END AS d\n INTO STRICT l_current_day;\n select (('0 ' ||\n EXTRACT (HOUR FROM ((now() at time zone 'UTC') at time zone '+03:00'))\n|| ':' ||\n EXTRACT (minute FROM ((now() at time zone 'UTC') at time zone\n'+03:00')) || ':00') :: interval)\n INTO l_current_interval;\n\n END;\n\n BEGIN\n SELECT\n COUNT(*)\n INTO STRICT item_available_count\n FROM menu_item_availability\n WHERE menu_item_id = i_menu_item_id;\n\n IF item_available_count = 0 THEN\n RETURN 'Y';\n ELSE\n SELECT\n COUNT(*)\n INTO STRICT item_available_count\n FROM menu_item_availability AS mia, availability AS av\n WHERE mia.menu_item_id = i_menu_item_id\n AND mia.availability_id = av.id\n AND date_trunc('DAY',now()) + l_current_interval >= (CASE\n WHEN l_current_interval < '6 hour'::INTERVAL THEN\ndate_trunc('DAY',now()) + av.start_time - (1::NUMERIC || ' days')::INTERVAL\n WHEN l_current_interval >= '6 hour'::INTERVAL THEN\ndate_trunc('DAY',now())+ av.start_time\n END) AND date_trunc('DAY',now()) + l_current_interval <= (CASE\n WHEN l_current_interval < '6 hour'::INTERVAL 
THEN\ndate_trunc('DAY',now()) + av.end_time - (1::NUMERIC || ' days')::INTERVAL\n WHEN l_current_interval >= '6 hour'::INTERVAL THEN\ndate_trunc('DAY',now()) + av.end_time\n END) AND (av.day_of_week LIKE CONCAT_WS('', '%', l_current_day, '%')\nOR av.day_of_week LIKE '%0%') AND is_deleted = 0;\n END IF;\n END;\n\n BEGIN\n IF item_available_count > 0 THEN\n RETURN 'Y';\n ELSE\n RETURN 'N';\n END IF;\n END;\n ELSE\n RETURN 'Y';\n END IF;\n END;\n END;\n $BODY$;\n\n\n\nOn Tue, Jun 8, 2021 at 7:03 PM Ayub Khan <[email protected]> wrote:\n\n>\n> I checked all the indexes are defined on the tables however the query\n> seems slow, below is the plan. Can any one give any pointers to verify ?\n>\n> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n>\n> FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a\n>\n> LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)\n> AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n>\n> below is the plan\n>\n>\n> Sort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 
loops=1)\n> \" Sort Key: a.row_order, a.menu_item_id\"\n> Sort Method: quicksort Memory: 48kB\n> -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)\n> Join Filter: (a.mark_id = m.mark_id)\n> Rows Removed by Join Filter: 267\n> -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)\n> -> Nested Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)\n> -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)\n> -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)\n> -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)\n> -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)\n> Index Cond: (restaurant_id = 1528)\n> \" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\"\n> Rows Removed by Filter: 194\n> -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_category_id = a.menu_item_category_id)\n> -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_id = (SubPlan 1))\n> Filter: (a.menu_item_id = menu_item_id)\n> SubPlan 1\n> -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)\n> -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)\n> -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)\n> Index Cond: (menu_item_id = a.menu_item_id)\n> Filter: (deleted = 'N'::bpchar)\n> Rows Removed by Filter: 4\n> -> Index Scan 
using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)\n> Filter: ((is_hidden)::text = 'false'::text)\n> -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)\n> Index Cond: (size_id = c.size_id)\n> -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89)\n> Index Cond: (restaurant_id = 1528)\n> -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)\n> Planning Time: 1.510 ms\n> Execution Time: 5.972 ms\n>\n>\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!\n\nbelow is function definition of is_menu_item_available, for each item based on current day time it returns when it's available or not. 
The same api works fine on oracle, I am seeing this slowness after migrating the queries to postgresql RDS on AWSCREATE OR REPLACE FUNCTION is_menu_item_available( \ti_menu_item_id bigint, \ti_check_availability character) RETURNS character LANGUAGE 'plpgsql' COST 100 VOLATILE PARALLEL UNSAFE AS $BODY$ DECLARE l_current_day NUMERIC(1); o_time CHARACTER VARYING(10); l_current_interval INTERVAL DAY TO SECOND(2); item_available_count NUMERIC(10); BEGIN item_available_count := 0; BEGIN IF i_check_availability = 'Y' THEN BEGIN SELECT CASE TO_CHAR(now(), 'fmday') WHEN 'monday' THEN 1 WHEN 'tuesday' THEN 2 WHEN 'wednesday' THEN 3 WHEN 'thursday' THEN 4 WHEN 'friday' THEN 5 WHEN 'saturday' THEN 6 WHEN 'sunday' THEN 7 END AS d INTO STRICT l_current_day; select (('0 ' || \t\t\tEXTRACT (HOUR FROM ((now() at time zone 'UTC') at time zone '+03:00')) || ':' || \t\t\tEXTRACT (minute FROM ((now() at time zone 'UTC') at time zone '+03:00')) || ':00') :: interval) INTO l_current_interval; END; BEGIN SELECT COUNT(*) INTO STRICT item_available_count FROM menu_item_availability WHERE menu_item_id = i_menu_item_id; IF item_available_count = 0 THEN RETURN 'Y'; ELSE SELECT COUNT(*) INTO STRICT item_available_count FROM menu_item_availability AS mia, availability AS av WHERE mia.menu_item_id = i_menu_item_id AND mia.availability_id = av.id AND date_trunc('DAY',now()) + l_current_interval >= (CASE WHEN l_current_interval < '6 hour'::INTERVAL THEN date_trunc('DAY',now()) + av.start_time - (1::NUMERIC || ' days')::INTERVAL WHEN l_current_interval >= '6 hour'::INTERVAL THEN date_trunc('DAY',now())+ av.start_time END) AND date_trunc('DAY',now()) + l_current_interval <= (CASE WHEN l_current_interval < '6 hour'::INTERVAL THEN date_trunc('DAY',now()) + av.end_time - (1::NUMERIC || ' days')::INTERVAL WHEN l_current_interval >= '6 hour'::INTERVAL THEN date_trunc('DAY',now()) + av.end_time END) AND (av.day_of_week LIKE CONCAT_WS('', '%', l_current_day, '%') OR av.day_of_week LIKE '%0%') AND 
is_deleted = 0; END IF; END; BEGIN IF item_available_count > 0 THEN RETURN 'Y'; ELSE RETURN 'N'; END IF; END; ELSE RETURN 'Y'; END IF; END; END; $BODY$;On Tue, Jun 8, 2021 at 7:03 PM Ayub Khan <[email protected]> wrote:I checked all the indexes are defined on the tables however the query seems slow, below is the plan. Can any one give any pointers to verify ?SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;below is the planSort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)\" Sort Key: a.row_order, a.menu_item_id\" Sort Method: quicksort Memory: 48kB -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1) Join Filter: (a.mark_id = m.mark_id) Rows Removed by Join Filter: 267 -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1) -> Nested Loop 
(cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1) -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1) -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1) -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1) -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1) Index Cond: (restaurant_id = 1528)\" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\" Rows Removed by Filter: 194 -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_category_id = a.menu_item_category_id) -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_id = (SubPlan 1)) Filter: (a.menu_item_id = menu_item_id) SubPlan 1 -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89) -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89) Index Cond: (menu_item_id = a.menu_item_id) Filter: (deleted = 'N'::bpchar) Rows Removed by Filter: 4 -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id) Filter: ((is_hidden)::text = 'false'::text) -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89) Index Cond: (size_id = c.size_id) -> Index Scan using \"restaurant_idx$$_274b003d\" 
on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89) Index Cond: (restaurant_id = 1528) -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)Planning Time: 1.510 msExecution Time: 5.972 ms\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and knowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Tue, 8 Jun 2021 20:43:13 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 12:32 PM Ayub Khan <[email protected]> wrote:\n\n> In AWS RDS performance insights the client writes is high and the api\n> which receives data on the mobile side is slow during load test.\n>\n\nThat indicates a client or network problem.\n\nJeff\n\nOn Tue, Jun 8, 2021 at 12:32 PM Ayub Khan <[email protected]> wrote:In AWS RDS performance insights the client writes is high and the api which receives data on the mobile side is slow during load test.That indicates a client or network problem.Jeff",
"msg_date": "Wed, 9 Jun 2021 15:57:04 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "Below is the test setup\n\nJmeter-->(load balanced tomcat on ec2 instances)---->rds read replicas\n\nAll these are running on different ec2 instances in AWS cloud in the same\nregion\n\nOn Tue, 8 Jun 2021, 19:03 Ayub Khan, <[email protected]> wrote:\n\n>\n> I checked all the indexes are defined on the tables however the query\n> seems slow, below is the plan. Can any one give any pointers to verify ?\n>\n> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n>\n> FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a\n>\n> LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)\n> AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n>\n> below is the plan\n>\n>\n> Sort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)\n> \" Sort Key: a.row_order, a.menu_item_id\"\n> Sort Method: quicksort Memory: 48kB\n> -> Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)\n> Join Filter: (a.mark_id = m.mark_id)\n> Rows Removed by Join 
Filter: 267\n> -> Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)\n> -> Nested Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)\n> -> Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)\n> -> Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)\n> -> Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)\n> -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)\n> Index Cond: (restaurant_id = 1528)\n> \" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\"\n> Rows Removed by Filter: 194\n> -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_category_id = a.menu_item_category_id)\n> -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_id = (SubPlan 1))\n> Filter: (a.menu_item_id = menu_item_id)\n> SubPlan 1\n> -> Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)\n> -> Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)\n> -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)\n> Index Cond: (menu_item_id = a.menu_item_id)\n> Filter: (deleted = 'N'::bpchar)\n> Rows Removed by Filter: 4\n> -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)\n> Filter: ((is_hidden)::text = 'false'::text)\n> -> 
Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)\n> Index Cond: (size_id = c.size_id)\n> -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89)\n> Index Cond: (restaurant_id = 1528)\n> -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)\n> Planning Time: 1.510 ms\n> Execution Time: 5.972 ms\n>\n>",
"msg_date": "Wed, 9 Jun 2021 23:23:48 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query"
}
] |
[
{
"msg_contents": "attached is the screenshot of RDS performance insights for AWS and it shows\nhigh waiting client writes. The api performance is slow. I read that this\nmight be due to IOPS on RDS. However we have 80k IOPS on this test RDS.\n\nBelow is the query which is being load tested\n\nSELECT\n\n a.menu_item_id,\n a.menu_item_name,\n a.menu_item_category_id,\n b.menu_item_category_desc,\n c.menu_item_variant_id,\n c.menu_item_variant_type_id,\n c.price,\n c.size_id,\n c.parent_menu_item_variant_id,\n d.menu_item_variant_type_desc,\n e.size_desc,\n f.currency_code,\n a.image,\n a.mark_id,\n m.mark_name\n\n FROM .menu_item_category AS b, .menu_item_variant AS c,\n .menu_item_variant_type AS d, .item_size AS e,\n.restaurant AS f,\n .menu_item AS a\n\n LEFT OUTER JOIN .mark AS m\n ON (a.mark_id = m.mark_id)\n\n WHERE a.menu_item_category_id =\nb.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n c.menu_item_variant_type_id =\nd.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n c.size_id = e.size_id AND a.restaurant_id =\nf.restaurant_id AND f.restaurant_id = 1528 AND\n (a.menu_item_category_id = NULL OR NULL IS NULL)\n\n AND c.menu_item_variant_id = (SELECT\nmin(menu_item_variant_id)\n FROM\n.menu_item_variant\n WHERE menu_item_id\n= a.menu_item_id AND deleted = 'N'\n LIMIT 1) AND\na.active = 'Y'\n AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n NULL IS NULL)\n AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n\n ORDER BY a.row_order, menu_item_id;\n\n--Ayub",
"msg_date": "Wed, 9 Jun 2021 17:47:02 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "waiting for client write"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 4:47 PM Ayub Khan <[email protected]> wrote:\n>\n> attached is the screenshot of RDS performance insights for AWS and it shows high waiting client writes. The api performance is slow. I read that this might be due to IOPS on RDS. However we have 80k IOPS on this test RDS.\n>\n\nClientWrite means Postgres is waiting on the *network* sending the\nreply back to the client, it is unrelated to I/O. So either your\nclient isn't consuming the response fast enough, or the network\nbetween them is too slow or shaped.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:49:38 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "@Magnus\n\nThere is an EC2 tomcat server which communicates to postgresql. This is a\nreplica of our production server except that in this case the test database\nis postgres RDS and our production is running oracle on EC2 instance.\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- 
\n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Wed, 9 Jun 2021 21:59:17 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "I did profiling of the application and it seems most of the CPU consumption\nis for executing the stored procedure. Attached is the screenshot of the\nprofile\n\n--Ayub\n\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 
1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Thu, 10 Jun 2021 11:06:03 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
    "msg_contents": "On Thu, Jun 10, 2021 at 4:06 AM Ayub Khan <[email protected]> wrote:\n\n> I did profiling of the application and it seems most of the CPU\n> consumption is for executing the stored procedure. Attached is the\n> screenshot of the profile\n>\n\nThat is of your tomcat server? If that is really a profile of your CPU\ntime (rather than wall-clock time) then it seems pretty clear your problem\nis on the client side, so there isn't much that can be done about it on the\ndatabase server.\n\nCheers,\n\nJeff\n\n>",
"msg_date": "Thu, 10 Jun 2021 11:40:01 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ayub,\n\nIdeally when i have to deal with this,\ni run a pgbench stress test locally on the db server on lo interface\nwhich does not suffer mtu / bandwidth saturation issues.\nthen run the same pgbench from a remote server in the same subnet as the\napp and record the results and compare.\nthat helps me get rid of any non standard client issues or network latency\nissues.\n\n\nA typical case where people above are pointing to is\n1) for ex. When I am in India and query a server in the US across WAN on a\nclient like pgadmin (which may not handle loading million rows\nefficiently), I have a high chance of getting ClientWrite ,ClientRead wait\nevents. ( Read client and or network issues )\nOf course this is much worse than ec2 and db in the same region, but you\nget the point that you have to rule out sketchy networks between the\nservers.\nIdeally an iperf like stress test can help to test bandwidth.\n\nSo if you can run pgbench from across some test servers and get consistent\nresults, then you can come back with a reply that more people can help with.\npgbench <https://www.postgresql.org/docs/current/pgbench.html>\nusing a custom script\n\npostgres@go:~/pgbench_example$ more pgbench.script\n\\set r1 random(0, 10000) -- you can use them below in queries as params\nlike col = :r1\n\\set r2 random(0, 8000)\n\nbegin;\nselect random();\nend;\n\n-- put in any query that you use in jmeter between begin/end like above\n-- select * from foo where (u1 = :r1 and u2 = :r2);\n-- insert into foo values (:u1v, :u2v) on conflict do nothing;\n-- update foo set u1 = :u1v where u2 = 100;\n-- select pg_sleep(1);\n\n\nand then run pgbench with the custom script\n\npostgres@go:~/pgbench_example$ pgbench -c 10 -f ./pgbench.script -j 10 -n\n-T 30\ntransaction type: ./pgbench.script\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 10\nduration: 30 s\nnumber of transactions actually processed: 528984\nlatency average = 0.567 ms\ntps = 
17631.650584 (including connections establishing)\ntps = 17642.174229 (excluding connections establishing)",
"msg_date": "Thu, 10 Jun 2021 22:28:59 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Vijay,\n\nBoth tomcat and postgresql are on the same region as that of the database\nserver. It is an RDS so I do not have shell access to it.\n\nJeff,\n\nThe tomcat profile is suggesting that it's waiting for a response from the\ndatabase server.\n\nTomcat and RDS are in the same availability region as eu-central-1a\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, 
menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Fri, 11 Jun 2021 19:28:26 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Hi Ayub\n\nSo, i understand the client are blocked waiting on a write to the database!\n\nWhat does the blocked thread signature say?\n\nAre you pre-creating any partitions?\n\nAre you experiencing Timed outs??\n\nWhat is the driver you are using now? If you are using Jdbc, can you update\nyour driver to the latest version?\n\n\n\nRegards\nPavan\n\n\n\n\n\n\nOn Fri, Jun 11, 2021, 11:28 AM Ayub Khan <[email protected]> wrote:\n\n> Vijay,\n>\n> Both tomcat and postgresql are on the same region as that of the database\n> server. It is an RDS so I do not have shell access to it.\n>\n> Jeff,\n>\n> The tomcat profile is suggesting that it's waiting for a response from the\n> database server.\n>\n> Tomcat and RDS are in the same availability region as eu-central-1a\n>\n> On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n>\n>> attached is the screenshot of RDS performance insights for AWS and it\n>> shows high waiting client writes. The api performance is slow. I read that\n>> this might be due to IOPS on RDS. 
However we have 80k IOPS on this test\n>> RDS.\n>>\n>> Below is the query which is being load tested\n>>\n>> SELECT\n>>\n>> a.menu_item_id,\n>> a.menu_item_name,\n>> a.menu_item_category_id,\n>> b.menu_item_category_desc,\n>> c.menu_item_variant_id,\n>> c.menu_item_variant_type_id,\n>> c.price,\n>> c.size_id,\n>> c.parent_menu_item_variant_id,\n>> d.menu_item_variant_type_desc,\n>> e.size_desc,\n>> f.currency_code,\n>> a.image,\n>> a.mark_id,\n>> m.mark_name\n>>\n>> FROM .menu_item_category AS b, .menu_item_variant AS\n>> c,\n>> .menu_item_variant_type AS d, .item_size AS e,\n>> .restaurant AS f,\n>> .menu_item AS a\n>>\n>> LEFT OUTER JOIN .mark AS m\n>> ON (a.mark_id = m.mark_id)\n>>\n>> WHERE a.menu_item_category_id =\n>> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n>> c.menu_item_variant_type_id =\n>> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n>> c.size_id = e.size_id AND a.restaurant_id =\n>> f.restaurant_id AND f.restaurant_id = 1528 AND\n>> (a.menu_item_category_id = NULL OR NULL IS\n>> NULL)\n>>\n>> AND c.menu_item_variant_id = (SELECT\n>> min(menu_item_variant_id)\n>> FROM\n>> .menu_item_variant\n>> WHERE\n>> menu_item_id = a.menu_item_id AND deleted = 'N'\n>> LIMIT 1) AND\n>> a.active = 'Y'\n>> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n>> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n>> NULL IS NULL)\n>> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>>\n>> ORDER BY a.row_order, menu_item_id;\n>>\n>> --Ayub\n>>\n>\n>\n> --\n> --------------------------------------------------------------------\n> Sun Certified Enterprise Architect 1.5\n> Sun Certified Java Programmer 1.4\n> Microsoft Certified Systems Engineer 2000\n> http://in.linkedin.com/pub/ayub-khan/a/811/b81\n> mobile:+966-502674604\n> ----------------------------------------------------------------------\n> It is proved that Hard Work and kowledge will get you close but attitude\n> will get you there. 
However, it's the Love\n> of God that will put you over the top!!\n>",
"msg_date": "Fri, 11 Jun 2021 11:46:43 -0500",
"msg_from": "Pavan Pusuluri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Em sex., 11 de jun. de 2021 às 13:28, Ayub Khan <[email protected]>\nescreveu:\n\n> Vijay,\n>\n> Both tomcat and postgresql are on the same region as that of the database\n> server. It is an RDS so I do not have shell access to it.\n>\n> Jeff,\n>\n> The tomcat profile is suggesting that it's waiting for a response from the\n> database server.\n>\n> Tomcat and RDS are in the same availability region as eu-central-1a\n>\n> On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n>\n>> attached is the screenshot of RDS performance insights for AWS and it\n>> shows high waiting client writes. The api performance is slow. I read that\n>> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n>> RDS.\n>>\n>> Below is the query which is being load tested\n>>\n>> SELECT\n>>\n>> a.menu_item_id,\n>> a.menu_item_name,\n>> a.menu_item_category_id,\n>> b.menu_item_category_desc,\n>> c.menu_item_variant_id,\n>> c.menu_item_variant_type_id,\n>> c.price,\n>> c.size_id,\n>> c.parent_menu_item_variant_id,\n>> d.menu_item_variant_type_desc,\n>> e.size_desc,\n>> f.currency_code,\n>> a.image,\n>> a.mark_id,\n>> m.mark_name\n>>\n>> FROM .menu_item_category AS b, .menu_item_variant AS\n>> c,\n>> .menu_item_variant_type AS d, .item_size AS e,\n>> .restaurant AS f,\n>> .menu_item AS a\n>>\n>> LEFT OUTER JOIN .mark AS m\n>> ON (a.mark_id = m.mark_id)\n>>\n>> WHERE a.menu_item_category_id =\n>> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n>> c.menu_item_variant_type_id =\n>> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n>> c.size_id = e.size_id AND a.restaurant_id =\n>> f.restaurant_id AND f.restaurant_id = 1528 AND\n>> (a.menu_item_category_id = NULL OR NULL IS\n>> NULL)\n>>\n>> AND c.menu_item_variant_id = (SELECT\n>> min(menu_item_variant_id)\n>> FROM\n>> .menu_item_variant\n>> WHERE\n>> menu_item_id = a.menu_item_id AND deleted = 'N'\n>> LIMIT 1) AND\n>> a.active = 'Y'\n>> AND (CONCAT_WS('', ',', 
a.hidden_branch_ids,\n>> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n>> NULL IS NULL)\n>> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>>\n>> ORDER BY a.row_order, menu_item_id;\n>>\n>> --Ayub\n>>\n> Can you post the results with: explain analyze?\nEXPLAIN ANALYZE\nSELECT ....\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 11 Jun 2021 13:52:44 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Pavan,\n\nIn jProfiler , I see that most cpu is consumed when the Tomcat thread is\nstuck at PgPreparedStatement.execute. I am using version 42.2.16 of JDBC\ndriver.\n\n\nRanier,\n\nEXPLAIN ANALYZE\n\nSELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,\nb.menu_item_category_desc, c.menu_item_variant_id,\nc.menu_item_variant_type_id, c.price, c.size_id,\nc.parent_menu_item_variant_id, d.menu_item_variant_type_desc,\ne.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n\n FROM menu_item_category AS b, menu_item_variant AS c,\nmenu_item_variant_type AS d, item_size AS e, restaurant AS f,\nmenu_item AS a\n\n LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE\na.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id =\nc.menu_item_id AND c.menu_item_variant_type_id =\nd.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id =\ne.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id =\n1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n\n AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM\nmenu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted =\n'N' limit 1) AND a.active = 'Y'\n AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE\nCONCAT_WS('', '%,4191,%') OR NULL IS NULL)\nAND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n\nNested Loop Left Join (cost=5.15..162.10 rows=1 width=148) (actual\ntime=0.168..5.070 rows=89 loops=1)\n Join Filter: (a.mark_id = m.mark_id)\n Rows Removed by Join Filter: 267\n -> Nested Loop (cost=5.15..161.04 rows=1 width=144) (actual\ntime=0.161..4.901 rows=89 loops=1)\n -> Nested Loop (cost=4.86..158.72 rows=1 width=148) (actual\ntime=0.156..4.729 rows=89 loops=1)\n -> Nested Loop (cost=4.57..158.41 rows=1 width=140)\n(actual time=0.151..4.572 rows=89 loops=1)\n -> Nested Loop (cost=4.28..158.10 rows=1\nwidth=132) (actual time=0.145..4.378 rows=89 loops=1)\n -> Nested Loop (cost=0.71..152.51 rows=1\nwidth=95) 
(actual time=0.121..3.334 rows=89 loops=1)\n -> Index Scan using\nmenu_item_restaurant_id on menu_item a (cost=0.42..150.20 rows=1\nwidth=83) (actual time=0.115..3.129 rows=89 loops=1)\n Index Cond: (restaurant_id = 1528)\n\" Filter: ((active = 'Y'::bpchar)\nAND (is_menu_item_available(menu_item_id, 'Y'::bpchar) =\n'Y'::bpchar))\"\n Rows Removed by Filter: 194\n -> Index Scan using\nmenu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1\nwidth=20) (actual time=0.002..0.002 rows=1 loops=89)\n Index Cond:\n(menu_item_category_id = a.menu_item_category_id)\n -> Index Scan using menu_item_variant_pk on\nmenu_item_variant c (cost=3.57..5.59 rows=1 width=45) (actual\ntime=0.002..0.002 rows=1 loops=89)\n Index Cond: (menu_item_variant_id = (SubPlan 1))\n Filter: (a.menu_item_id = menu_item_id)\n SubPlan 1\n -> Limit (cost=3.13..3.14 rows=1\nwidth=8) (actual time=0.008..0.008 rows=1 loops=89)\n -> Aggregate\n(cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1\nloops=89)\n -> Index Scan using\n\"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.11 rows=8\nwidth=8) (actual time=0.003..0.007 rows=7 loops=89)\n Index Cond:\n(menu_item_id = a.menu_item_id)\n Filter: (deleted =\n'N'::bpchar)\n Rows Removed by Filter: 4\n -> Index Scan using menu_item_variant_type_pk on\nmenu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual\ntime=0.002..0.002 rows=1 loops=89)\n Index Cond: (menu_item_variant_type_id =\nc.menu_item_variant_type_id)\n Filter: ((is_hidden)::text = 'false'::text)\n -> Index Scan using size_pk on item_size e\n(cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1\nloops=89)\n Index Cond: (size_id = c.size_id)\n -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant\nf (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1\nloops=89)\n Index Cond: (restaurant_id = 1528)\n -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual\ntime=0.000..0.001 rows=3 loops=89)\nPlanning Time: 2.078 
ms\nExecution Time: 5.141 ms\n\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 
2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!\n\nPavan,In jProfiler , I see that most cpu is consumed when the Tomcat thread is stuck at PgPreparedStatement.execute. I am using version 42.2.16 of JDBC driver.Ranier,EXPLAIN ANALYZESELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n\n\nNested Loop Left Join (cost=5.15..162.10 rows=1 width=148) (actual time=0.168..5.070 rows=89 loops=1) Join Filter: (a.mark_id = m.mark_id) Rows Removed by Join Filter: 267 -> Nested Loop (cost=5.15..161.04 rows=1 width=144) (actual time=0.161..4.901 rows=89 loops=1) -> Nested Loop (cost=4.86..158.72 rows=1 width=148) (actual time=0.156..4.729 rows=89 loops=1) -> Nested Loop 
(cost=4.57..158.41 rows=1 width=140) (actual time=0.151..4.572 rows=89 loops=1) -> Nested Loop (cost=4.28..158.10 rows=1 width=132) (actual time=0.145..4.378 rows=89 loops=1) -> Nested Loop (cost=0.71..152.51 rows=1 width=95) (actual time=0.121..3.334 rows=89 loops=1) -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..150.20 rows=1 width=83) (actual time=0.115..3.129 rows=89 loops=1) Index Cond: (restaurant_id = 1528)\" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\" Rows Removed by Filter: 194 -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_category_id = a.menu_item_category_id) -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.57..5.59 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_id = (SubPlan 1)) Filter: (a.menu_item_id = menu_item_id) SubPlan 1 -> Limit (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Aggregate (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.11 rows=8 width=8) (actual time=0.003..0.007 rows=7 loops=89) Index Cond: (menu_item_id = a.menu_item_id) Filter: (deleted = 'N'::bpchar) Rows Removed by Filter: 4 -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id) Filter: ((is_hidden)::text = 'false'::text) -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89) Index Cond: (size_id = c.size_id) -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=89) 
Index Cond: (restaurant_id = 1528) -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)Planning Time: 2.078 msExecution Time: 5.141 ms\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:attached is the screenshot of RDS performance insights for AWS and it shows high waiting client writes. The api performance is slow. I read that this might be due to IOPS on RDS. However we have 80k IOPS on this test RDS. Below is the query which is being load testedSELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM .menu_item_category AS b, .menu_item_variant AS c, .menu_item_variant_type AS d, .item_size AS e, .restaurant AS f, .menu_item AS a LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM .menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' LIMIT 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;--Ayub\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 
2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and kowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Fri, 11 Jun 2021 19:59:28 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Em sex., 11 de jun. de 2021 às 13:59, Ayub Khan <[email protected]>\nescreveu:\n\n> Pavan,\n>\n> In jProfiler , I see that most cpu is consumed when the Tomcat thread is\n> stuck at PgPreparedStatement.execute. I am using version 42.2.16 of JDBC\n> driver.\n>\n>\n> Ranier,\n>\n> EXPLAIN ANALYZE\n>\n> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name\n>\n> FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a\n>\n> LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)\n> AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> Nested Loop Left Join (cost=5.15..162.10 rows=1 width=148) (actual time=0.168..5.070 rows=89 loops=1)\n> Join Filter: (a.mark_id = m.mark_id)\n> Rows Removed by Join Filter: 267\n> -> Nested Loop (cost=5.15..161.04 rows=1 width=144) (actual time=0.161..4.901 rows=89 loops=1)\n> -> Nested Loop (cost=4.86..158.72 rows=1 width=148) (actual time=0.156..4.729 rows=89 loops=1)\n> -> Nested Loop (cost=4.57..158.41 rows=1 width=140) (actual time=0.151..4.572 rows=89 loops=1)\n> -> Nested Loop (cost=4.28..158.10 rows=1 width=132) 
(actual time=0.145..4.378 rows=89 loops=1)\n> -> Nested Loop (cost=0.71..152.51 rows=1 width=95) (actual time=0.121..3.334 rows=89 loops=1)\n> -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..150.20 rows=1 width=83) (actual time=0.115..3.129 rows=89 loops=1)\n> Index Cond: (restaurant_id = 1528)\n> \" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\"\n> Rows Removed by Filter: 194\n> -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_category_id = a.menu_item_category_id)\n> -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.57..5.59 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_id = (SubPlan 1))\n> Filter: (a.menu_item_id = menu_item_id)\n> SubPlan 1\n> -> Limit (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)\n> -> Aggregate (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)\n> -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.11 rows=8 width=8) (actual time=0.003..0.007 rows=7 loops=89)\n> Index Cond: (menu_item_id = a.menu_item_id)\n> Filter: (deleted = 'N'::bpchar)\n> Rows Removed by Filter: 4\n> -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)\n> Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)\n> Filter: ((is_hidden)::text = 'false'::text)\n> -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)\n> Index Cond: (size_id = c.size_id)\n> -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=89)\n> Index Cond: (restaurant_id = 1528)\n> -> Seq Scan on mark m 
(cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)\n> Planning Time: 2.078 ms\n> Execution Time: 5.141 ms\n>\n> My guess is a bad planner (or slow planner) because of wrong or\ninconsistent parameters.\nYou must define precise parameters before calling a Prepared Statement or\nPlanner will try to guess to do the best.\n\nBut this is a simple \"guess\" and can be completely wrong.\n\nregards,\nRanier Vilela\n\nEm sex., 11 de jun. de 2021 às 13:59, Ayub Khan <[email protected]> escreveu:Pavan,In jProfiler , I see that most cpu is consumed when the Tomcat thread is stuck at PgPreparedStatement.execute. I am using version 42.2.16 of JDBC driver.Ranier,EXPLAIN ANALYZESELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM menu_item_category AS b, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e, restaurant AS f, menu_item AS a LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' limit 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n\n\nNested Loop Left Join (cost=5.15..162.10 rows=1 width=148) (actual time=0.168..5.070 rows=89 loops=1) Join Filter: (a.mark_id = m.mark_id) Rows Removed by Join Filter: 267 -> 
Nested Loop (cost=5.15..161.04 rows=1 width=144) (actual time=0.161..4.901 rows=89 loops=1) -> Nested Loop (cost=4.86..158.72 rows=1 width=148) (actual time=0.156..4.729 rows=89 loops=1) -> Nested Loop (cost=4.57..158.41 rows=1 width=140) (actual time=0.151..4.572 rows=89 loops=1) -> Nested Loop (cost=4.28..158.10 rows=1 width=132) (actual time=0.145..4.378 rows=89 loops=1) -> Nested Loop (cost=0.71..152.51 rows=1 width=95) (actual time=0.121..3.334 rows=89 loops=1) -> Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..150.20 rows=1 width=83) (actual time=0.115..3.129 rows=89 loops=1) Index Cond: (restaurant_id = 1528)\" Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))\" Rows Removed by Filter: 194 -> Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_category_id = a.menu_item_category_id) -> Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.57..5.59 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_id = (SubPlan 1)) Filter: (a.menu_item_id = menu_item_id) SubPlan 1 -> Limit (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Aggregate (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89) -> Index Scan using \"idx$$_023a0001\" on menu_item_variant (cost=0.43..3.11 rows=8 width=8) (actual time=0.003..0.007 rows=7 loops=89) Index Cond: (menu_item_id = a.menu_item_id) Filter: (deleted = 'N'::bpchar) Rows Removed by Filter: 4 -> Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89) Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id) Filter: ((is_hidden)::text = 'false'::text) -> Index Scan using size_pk on item_size e (cost=0.29..0.31 rows=1 width=16) (actual 
time=0.001..0.001 rows=1 loops=89) Index Cond: (size_id = c.size_id) -> Index Scan using \"restaurant_idx$$_274b003d\" on restaurant f (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=89) Index Cond: (restaurant_id = 1528) -> Seq Scan on mark m (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)Planning Time: 2.078 msExecution Time: 5.141 msMy guess is a bad planner (or slow planner) because of wrong or inconsistent parameters.You must define precise parameters before calling a Prepared Statement or Planner will try to guess to do the best.But this is a simple \"guess\" and can be completely wrong.regards,Ranier Vilela",
"msg_date": "Fri, 11 Jun 2021 14:37:53 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier,\n\nI tried to VACCUM ANALYZE the tables involved multiple times and also tried\nthe statistics approach as well\n\nPavan,\n\nI upgraded to 42.2.21 version of jdbc driver and using HikariCp connection\npool management 3.1.0\n\njProfiler shows the threads are stuck with high cpu usage on.\n\norg.postgresql.jdbc.PgPreparedStatement.execute ,\n\n below is the java code which calls postgresql\n\nConnection con = null;\nCallableStatement callableStatement = null;\nResultSet rs = null;\nResultSet rs1 = null;\nPreparedStatement ps = null;\ntry {\n con = connectionManager.getConnetion();\ncon.setAutoCommit(false);\n callableStatement = con.prepareCall(\"call\nmenu_pkg$get_menu_items_p_new(?,?,?,?,?,?)\");\n if (catId == 0)\n callableStatement.setNull(2, Types.BIGINT);\n else\n callableStatement.setLong(2, catId);\n callableStatement.setString(3, \"Y\");\n\n if (branchId == 0)\n callableStatement.setString(4, null);\n else\n callableStatement.setLong(4, branchId);\n\n callableStatement.setNull(5, Types.OTHER);\n callableStatement.setNull(6, Types.OTHER);\n callableStatement.registerOutParameter(5, Types.OTHER);\n callableStatement.registerOutParameter(6, Types.OTHER);\n callableStatement.execute();\n rs = (ResultSet) callableStatement.getObject(5);\n rs1 = (ResultSet) callableStatement.getObject(6);\n MenuMobile menuMobile;\n\n try {\n while (rs.next()) {\n\n //process rs\n }\n MenuCombo menuCombo;\n while (rs1.next()) {\n //process rs1\n }\n\n menuMobileListCombo.setMenuComboList(menuComboList);\n menuMobileListCombo.setMenuMobileList(menuMobileList);\n } catch (SQLException e) {\n LOG.error(e.getLocalizedMessage(), e);\n }\n\n con.commit();\n con.setAutoCommit(true);\n} catch (SQLException e) {\n LOG.error(e.getLocalizedMessage(), e);\n throw e;\n} finally {\n if (rs != null)\n rs.close();\n if (rs1 != null)\n rs1.close();\n if (ps != null)\n ps.close();\n\n if (callableStatement != null) callableStatement.close();\n if (con != null) 
con.close();\n}\nreturn menuMobileListCombo;\n}\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 
2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!\n\nRanier,I tried to VACCUM ANALYZE the tables involved multiple times and also tried the statistics approach as wellPavan, I upgraded to 42.2.21 version of jdbc driver and using HikariCp connection pool management 3.1.0jProfiler shows the threads are stuck with high cpu usage on. org.postgresql.jdbc.PgPreparedStatement.execute , below is the java code which calls postgresql Connection con = null;CallableStatement callableStatement = null;ResultSet rs = null;ResultSet rs1 = null;PreparedStatement ps = null;try { con = connectionManager.getConnetion();con.setAutoCommit(false); callableStatement = con.prepareCall(\"call menu_pkg$get_menu_items_p_new(?,?,?,?,?,?)\"); if (catId == 0) callableStatement.setNull(2, Types.BIGINT); else callableStatement.setLong(2, catId); callableStatement.setString(3, \"Y\"); if (branchId == 0) callableStatement.setString(4, null); else callableStatement.setLong(4, branchId); callableStatement.setNull(5, Types.OTHER); callableStatement.setNull(6, Types.OTHER); callableStatement.registerOutParameter(5, Types.OTHER); callableStatement.registerOutParameter(6, Types.OTHER); callableStatement.execute(); rs = (ResultSet) callableStatement.getObject(5); rs1 = (ResultSet) callableStatement.getObject(6); MenuMobile menuMobile; try { while (rs.next()) { //process rs } MenuCombo menuCombo; while (rs1.next()) { //process rs1 } menuMobileListCombo.setMenuComboList(menuComboList); menuMobileListCombo.setMenuMobileList(menuMobileList); } catch (SQLException e) { LOG.error(e.getLocalizedMessage(), e); } con.commit(); con.setAutoCommit(true);} catch (SQLException e) { LOG.error(e.getLocalizedMessage(), e); throw e;} finally { if (rs != null) rs.close(); 
if (rs1 != null) rs1.close(); if (ps != null) ps.close(); if (callableStatement != null) callableStatement.close(); if (con != null) con.close();}return menuMobileListCombo;}On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:attached is the screenshot of RDS performance insights for AWS and it shows high waiting client writes. The api performance is slow. I read that this might be due to IOPS on RDS. However we have 80k IOPS on this test RDS. Below is the query which is being load testedSELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM .menu_item_category AS b, .menu_item_variant AS c, .menu_item_variant_type AS d, .item_size AS e, .restaurant AS f, .menu_item AS a LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM .menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' LIMIT 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;--Ayub\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 
2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and kowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Fri, 11 Jun 2021 20:59:07 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Em sex., 11 de jun. de 2021 às 14:59, Ayub Khan <[email protected]>\nescreveu:\n\n> Ranier,\n>\n> I tried to VACCUM ANALYZE the tables involved multiple times and also\n> tried the statistics approach as well\n>\nAyub you can try by the network side:\n\nhttps://stackoverflow.com/questions/50298447/postgres-jdbc-client-getting-stuck-at-reading-from-socket\n\n\" We found out that this was caused by the database server's MTU setting.\nMTU was set to 9000 by default and resulted in packet loss. Changing it to\n1500 resolved the issue.\"\n\nregards,\nRanier Vilela\n\nEm sex., 11 de jun. de 2021 às 14:59, Ayub Khan <[email protected]> escreveu:Ranier,I tried to VACCUM ANALYZE the tables involved multiple times and also tried the statistics approach as wellAyub you can try by the network side:https://stackoverflow.com/questions/50298447/postgres-jdbc-client-getting-stuck-at-reading-from-socket\"\nWe found out that this was caused by the database server's MTU setting. MTU was set to 9000 by default and resulted in packet loss. Changing it \nto 1500 resolved the issue.\"regards,Ranier Vilela",
"msg_date": "Fri, 11 Jun 2021 15:03:03 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier,\n\nI verified the link you gave and also checked AWS documentation and found\nthe exact output as shown in AWS:\n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html\n\n[ec2-user ~]$ tracepath amazon.com\n 1?: [LOCALHOST] pmtu 9001\n 1: ip-xxx-xx-xx-1.us-west-1.compute.internal (xxx.xx.xx.x) 0.187ms pmtu 1500\n\nShould the LOCALHOST pmtu needs to be updated to 1500 ?\n\n\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND 
.is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!\n\nRanier,I verified the link you gave and also checked AWS documentation and found the exact output as shown in AWS:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html[ec2-user ~]$ tracepath amazon.com\r\n 1?: [LOCALHOST] pmtu 9001\r\n 1: ip-xxx-xx-xx-1.us-west-1.compute.internal (xxx.xx.xx.x) 0.187ms pmtu 1500\r\n\r\nShould the LOCALHOST pmtu needs to be updated to 1500 ?\r\n\r\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:attached is the screenshot of RDS performance insights for AWS and it shows high waiting client writes. The api performance is slow. I read that this might be due to IOPS on RDS. However we have 80k IOPS on this test RDS. 
Below is the query which is being load testedSELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id, b.menu_item_category_desc, c.menu_item_variant_id, c.menu_item_variant_type_id, c.price, c.size_id, c.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name FROM .menu_item_category AS b, .menu_item_variant AS c, .menu_item_variant_type AS d, .item_size AS e, .restaurant AS f, .menu_item AS a LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id) WHERE a.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id = f.restaurant_id AND f.restaurant_id = 1528 AND (a.menu_item_category_id = NULL OR NULL IS NULL) AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id) FROM .menu_item_variant WHERE menu_item_id = a.menu_item_id AND deleted = 'N' LIMIT 1) AND a.active = 'Y' AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL) AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y' ORDER BY a.row_order, menu_item_id;--Ayub\n-- --------------------------------------------------------------------Sun Certified Enterprise Architect 1.5Sun Certified Java Programmer 1.4Microsoft Certified Systems Engineer 2000http://in.linkedin.com/pub/ayub-khan/a/811/b81mobile:+966-502674604----------------------------------------------------------------------It is proved that Hard Work and kowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!",
"msg_date": "Fri, 11 Jun 2021 21:19:19 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 3:19 PM, Ayub Khan <[email protected]>\nwrote:\n\n> Ranier,\n>\n> I verified the link you gave and also checked AWS documentation and found\n> the exact output as shown in AWS:\n>\n> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html\n>\n> [ec2-user ~]$ tracepath amazon.com\n> 1?: [LOCALHOST] pmtu 9001\n> 1: ip-xxx-xx-xx-1.us-west-1.compute.internal (xxx.xx.xx.x) 0.187ms pmtu 1500\n>\n> Should the LOCALHOST pmtu needs to be updated to 1500 ?\n>\n> Or us-west-1.compute.internal should be set to 9000.\nI think both must match. 9000 are jumbo frames.\nThe bigger the better.\n\nTry switching to 9000 first and 1500 if it does not work.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 11 Jun 2021 15:25:04 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier,\n\nBoth production and test vms are running on Ubuntu:\n\nthe below command when executed from client VM shows that its using\nPMTU 9001.\n\n# tracepath dns-name-of-rds\n 1?: [LOCALHOST] pmtu 9001\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun 
Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Fri, 11 Jun 2021 22:45:27 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 12:28 PM Ayub Khan <[email protected]> wrote:\n\n> Vijay,\n>\n> Both tomcat and postgresql are on the same region as that of the database\n> server. It is an RDS so I do not have shell access to it.\n>\n> Jeff,\n>\n> The tomcat profile is suggesting that it's waiting for a response from the\n> database server.\n>\n\nBut waiting for a response should consume zero CPU, that is why I wonder if\nthis is a CPU profile or a wall-time profile.\n\nTomcat and RDS are in the same availability region as eu-central-1a\n>\n\nI don't think that that necessarily guarantees high network performance.\nSome EC2 server classes have better networking than others. And if the\nserver says it is waiting for the client, and the client says it is\nwaiting server (assuming that that is really what it is saying), then what\nelse could it be but the network?\n\nCheers,\n\nJeff",
"msg_date": "Fri, 11 Jun 2021 17:58:50 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Jeff,\n\nBoth tomcat vm and RDS vms have 25Gbps\n\nPostgresql Db class is db.r6g.16xlarge\nTomcat vm is c5.9xlarge\n\n--Ayub\n\nOn Wed, 9 Jun 2021, 17:47 Ayub Khan, <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>",
"msg_date": "Sat, 12 Jun 2021 01:06:20 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier,\n\nThis issue is only with queries which are slow, if it's an MTU issue then\nit should be with all the APIs.\n\nI tried on Aurora db and I see same plan and also same slowness\n\nOn Wed, 9 Jun 2021, 17:47 Ayub Khan, <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>",
"msg_date": "Sat, 12 Jun 2021 11:19:57 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "On Sat, Jun 12, 2021 at 5:20 AM, Ayub Khan <[email protected]>\nwrote:\n\n> Ranier,\n>\n> This issue is only with queries which are slow, if it's an MTU issue then\n> it should be with all the APIs.\n>\n> I tried on Aurora db and I see same plan and also same slowness\n>\nI think it is more indicative that the problem is in the network.\nDespite having a proposed solution (MTU), you still have to ensure that the\nmain problem happens (*packet loss*).\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 12 Jun 2021 09:03:58 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "since you are willing to try out options :)\n\nif your setup runs the same test plan queries on jmeter against oracle and\npostgresql\nand only postgresql shows waits or degraded performance I think this is\nmore then simply network.\n\ncan you simply boot up an ec2 ubuntu/centos and install postgresql.\nand run pgbench locally on the installed db with the same queries.\n\nand run the same pgbench against RDS and share the results.\n\n\n\n\n\n\n\n\nOn Sat, 12 Jun 2021 at 13:50, Ayub Khan <[email protected]> wrote:\n\n> Ranier,\n>\n> This issue is only with queries which are slow, if it's an MTU issue then\n> it should be with all the APIs.\n>\n> I tried on Aurora db and I see same plan and also same slowness\n>\n> On Wed, 9 Jun 2021, 17:47 Ayub Khan, <[email protected]> wrote:\n>\n>> attached is the screenshot of RDS performance insights for AWS and it\n>> shows high waiting client writes. The api performance is slow. I read that\n>> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n>> RDS.\n>>\n>> Below is the query which is being load tested\n>>\n>> SELECT\n>>\n>> a.menu_item_id,\n>> a.menu_item_name,\n>> a.menu_item_category_id,\n>> b.menu_item_category_desc,\n>> c.menu_item_variant_id,\n>> c.menu_item_variant_type_id,\n>> c.price,\n>> c.size_id,\n>> c.parent_menu_item_variant_id,\n>> d.menu_item_variant_type_desc,\n>> e.size_desc,\n>> f.currency_code,\n>> a.image,\n>> a.mark_id,\n>> m.mark_name\n>>\n>> FROM .menu_item_category AS b, .menu_item_variant AS\n>> c,\n>> .menu_item_variant_type AS d, .item_size AS e,\n>> .restaurant AS f,\n>> .menu_item AS a\n>>\n>> LEFT OUTER JOIN .mark AS m\n>> ON (a.mark_id = m.mark_id)\n>>\n>> WHERE a.menu_item_category_id =\n>> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n>> c.menu_item_variant_type_id =\n>> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n>> c.size_id = e.size_id AND a.restaurant_id =\n>> f.restaurant_id AND f.restaurant_id = 1528 AND\n>> 
(a.menu_item_category_id = NULL OR NULL IS\n>> NULL)\n>>\n>> AND c.menu_item_variant_id = (SELECT\n>> min(menu_item_variant_id)\n>> FROM\n>> .menu_item_variant\n>> WHERE\n>> menu_item_id = a.menu_item_id AND deleted = 'N'\n>> LIMIT 1) AND\n>> a.active = 'Y'\n>> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n>> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n>> NULL IS NULL)\n>> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>>\n>> ORDER BY a.row_order, menu_item_id;\n>>\n>> --Ayub\n>>\n>\n\n-- \nThanks,\nVijay\nMumbai, India",
"msg_date": "Sat, 12 Jun 2021 17:34:39 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier, Vijay,\n\nSure will try and check out pgbench and MTU\n\n--Ayub\n\nOn Wed, 9 Jun 2021, 17:47 Ayub Khan, <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>",
"msg_date": "Sat, 12 Jun 2021 15:34:37 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Ranier,\n\nI did the MTU change and it did seem to bring down the clientWrite waits to\nhalf.\n\nThe change I did was to enable ICMP to have Destination Unreachable\nfragmentation needed and DF set\n\n\"When there is a difference in the MTU size in the network between two\nhosts, first make sure that your network settings don't block path MTU\ndiscovery (PMTUD). PMTUD enables the receiving host to respond to the\noriginating host with the following ICMP message: Destination Unreachable:\nfragmentation needed and DF set (ICMP Type 3, Code 4). This message\ninstructs the originating host to use the lowest MTU size along the network\npath to resend the request. \"\n\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/connecting-drop-issues.html\n\n\n\nVijay,\n\nBelow is the pgbench result which was executed from the remote instance\npointing to RDS\n\npostgres@localhost:/archive$ pgbench -h pg-cluster -p 5432 -U testuser -c\n50 -j 2 -P 60 -T 600 testdb -f /archive/test.sql\nstarting vacuum...pgbench: error: ERROR: relation \"pgbench_branches\" does\nnot exist\npgbench: (ignoring this error and continuing anyway)\npgbench: error: ERROR: relation \"pgbench_tellers\" does not exist\npgbench: (ignoring this error and continuing anyway)\npgbench: error: ERROR: relation \"pgbench_history\" does not exist\npgbench: (ignoring this error and continuing anyway)\nend.\nprogress: 60.0 s, 18.3 tps, lat 2631.655 ms stddev 293.592\nprogress: 120.0 s, 19.6 tps, lat 2533.271 ms stddev 223.722\nprogress: 180.0 s, 20.3 tps, lat 2446.050 ms stddev 158.397\nprogress: 240.1 s, 19.2 tps, lat 2575.506 ms stddev 292.418\nprogress: 300.0 s, 20.0 tps, lat 2482.908 ms stddev 181.770\nprogress: 360.0 s, 22.1 tps, lat 2245.147 ms stddev 110.855\nprogress: 420.0 s, 20.7 tps, lat 2397.270 ms stddev 289.324\nprogress: 480.0 s, 18.8 tps, lat 2625.595 ms stddev 240.250\nprogress: 540.0 s, 20.1 tps, lat 2467.336 ms stddev 133.121\nprogress: 600.0 s, 20.2 tps, lat 2455.824 ms stddev 
137.976\ntransaction type: /archive/test.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 50\nnumber of threads: 2\nduration: 600 s\nnumber of transactions actually processed: 12007\nlatency average = 2480.042 ms\nlatency stddev = 242.243 ms\ntps = 19.955602 (including connections establishing)\ntps = 19.955890 (excluding connections establishing)\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 
'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Sun, 13 Jun 2021 17:00:23 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
    "msg_contents": "thanks.\n\n>latency average = 2480.042 ms\n\nthat latency is pretty high, even after changing the mtu ? for a query that\ntakes 5ms to run (from your explain analyze above) and returns a few 100\nrows.\n\nso it does look like network latency, but it seems strange when you said\nthe same query from the same ec2 host ran fast against oracle compared to\npostgres RDS.\nSo oracle and RDS on vms with separate mtu settings ?\n\ni tried to simulate issues with the client, if any :),\nSlow ClientWrites · Issue #2480 · brianc/node-postgres (github.com)\n<https://github.com/brianc/node-postgres/issues/2480>\nSome intial performance work by brianc · Pull Request #2031 ·\nbrianc/node-postgres (github.com)\n<https://github.com/brianc/node-postgres/pull/2031>\nClientRead statement_timeout · Issue #1952 · brianc/node-postgres\n(github.com) <https://github.com/brianc/node-postgres/issues/1952>\n\nI tried to play around with FEBE protocols to delay flush sync etc, but\ncould not simulate the clientwrite wait event :(. Sorry.\nand i am not touching java :)\n\nI was asking for a run like this, with -r, that would have shown latency per\nstatement. 
but anyways.\n\nBelow I make use of tc (traffic control) to add a delay to my lo interface,\nand check how pgbench runs vary with increased latency at the interface.\nuseless for your case, but i use this to tell dev when they have\nslow queries: if roundtrip delay is high, it is not a pg fault :)\n\npostgres@db:~/playground$ sudo tc -s qdisc | head -3\nqdisc noqueue 0: dev lo root refcnt 2\n Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)\n backlog 0b 0p requeues 0\n\npostgres@db:~/playground$ sudo tc qdisc add dev lo root netem delay 100ms\n # add a delay on lo of 100ms via tc module\n\npostgres@db:~/playground$ sudo tc -s qdisc | head -3\nqdisc netem 8007: dev lo root refcnt 2 limit 1000 delay 100.0ms\n Sent 8398 bytes 15 pkt (dropped 0, overlimits 0 requeues 0)\n backlog 0b 0p requeues 0\n\npostgres@db:~/playground$ pgbench -c 2 -C -j 2 -n -P 2 -T 10 -r -f\npgbench.test -h 127.0.0.1\nprogress: 2.0 s, 1.0 tps, lat 603.211 ms stddev 0.007\nprogress: 4.2 s, 1.8 tps, lat 602.838 ms stddev 0.101\nprogress: 6.0 s, 1.1 tps, lat 603.730 ms stddev 0.034\nprogress: 8.0 s, 2.0 tps, lat 603.058 ms stddev 0.081\nprogress: 10.3 s, 1.8 tps, lat 600.852 ms stddev 0.030\npgbench (PostgreSQL) 14.0\ntransaction type: pgbench.test\nscaling factor: 1\nquery mode: simple\nnumber of clients: 2\nnumber of threads: 2\nduration: 10 s\nnumber of transactions actually processed: 18\nlatency average = 602.357 ms\nlatency stddev = 1.112 ms\naverage connection time = 605.499 ms\ntps = 1.655749 (including reconnection times)\nstatement latencies in milliseconds:\n         200.672  begin;\n         200.797  select 1;\n         200.917  end;\n\npostgres@db:~/playground$ sudo tc qdisc del dev lo root netem  # remove\ndelay\npostgres@db:~/playground$ sudo tc -s qdisc | head -3\nqdisc noqueue 0: dev lo root refcnt 2\n Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)\n backlog 0b 0p requeues 0\n\npostgres@db:~/playground$ pgbench -c 2 -C -j 2 -n -P 2 -T 10 -r -f\npgbench.test -h 127.0.0.1\nprogress: 2.0 s, 1272.4 
tps, lat 0.200 ms stddev 0.273\nprogress: 4.0 s, 1155.3 tps, lat 0.306 ms stddev 0.304\nprogress: 6.0 s, 1241.7 tps, lat 0.261 ms stddev 0.290\nprogress: 8.0 s, 1508.6 tps, lat 0.150 ms stddev 0.140\nprogress: 10.0 s, 1172.7 tps, lat 0.292 ms stddev 0.302\npgbench (PostgreSQL) 14.0\ntransaction type: pgbench.test\nscaling factor: 1\nquery mode: simple\nnumber of clients: 2\nnumber of threads: 2\nduration: 10 s\nnumber of transactions actually processed: 12704\nlatency average = 0.236 ms\nlatency stddev = 0.271 ms\naverage connection time = 1.228 ms\ntps = 1270.314254 (including reconnection times)\nstatement latencies in milliseconds:\n         0.074  begin;\n         0.115  select 1;\n         0.048  end;",
"msg_date": "Sun, 13 Jun 2021 22:00:29 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Vijay,\n\nI did not change the MTU on the network interface but created incoming\nrule on the security group as per the below documentation:\n\n PMTUD enables the receiving host to respond to the originating host with\nthe following ICMP message: Destination Unreachable: fragmentation needed\nand DF set (ICMP Type 3, Code 4). This message instructs the originating\nhost to use the lowest MTU size along the network path to resend the\nrequest. Without this negotiation, packet drop can occur because the\nrequest is too large for the receiving host to accept.\n\nI also did another test, instead of using RDS, installed postgresql on a\nsimilar VM as that of where oracle is installed and tested it. Now even\nwhen both client and postgresql VMs have the same MTU settings still in the\npg activity table I could see clientwrite waits.\n\n-Ayub\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. 
However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. 
However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Sun, 13 Jun 2021 20:00:17 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Vijay,\n\nbelow is the benchmark result when executed against bench_mark database\ninstead of running the test with slow query on the application database.\nThis shows that it might not be an issue with MTU but some issue with the\napplication database itself and the query.\n\n\npostgres@localhost:~$ pgbench -h test-cluster -p 5432 -U testuser -c 50 -j\n2 -P 60 -T 600 bench_mark\nstarting vacuum...end.\nprogress: 60.0 s, 17830.3 tps, lat 2.765 ms stddev 0.632\nprogress: 120.0 s, 18450.3 tps, lat 2.681 ms stddev 0.582\nprogress: 180.0 s, 18405.0 tps, lat 2.688 ms stddev 0.588\nprogress: 240.0 s, 17087.9 tps, lat 2.897 ms stddev 0.717\nprogress: 300.0 s, 18280.6 tps, lat 2.706 ms stddev 0.595\nprogress: 360.0 s, 18433.9 tps, lat 2.683 ms stddev 0.582\nprogress: 420.0 s, 18308.4 tps, lat 2.702 ms stddev 0.599\nprogress: 480.0 s, 18156.7 tps, lat 2.725 ms stddev 0.615\nprogress: 540.0 s, 16803.3 tps, lat 2.946 ms stddev 0.764\nprogress: 600.0 s, 18266.6 tps, lat 2.708 ms stddev 0.602\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 150\nquery mode: simple\nnumber of clients: 50\nnumber of threads: 2\nduration: 600 s\nnumber of transactions actually processed: 10801425\nlatency average = 2.747 ms\nlatency stddev = 0.635 ms\ntps = 18001.935315 (including connections establishing)\ntps = 18002.205940 (excluding connections establishing)\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. 
However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. 
However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Mon, 14 Jun 2021 08:02:17 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
"msg_contents": "Would it be a cursor issue on postgres, as there seems to be a\ndifference in how cursors are handled in postgres and Oracle database. It\nseems cursors are returned as buffers to the client side. Below are the\nsteps we take from jdbc side\n\nbelow is the stored procedure code:\n\nCREATE OR REPLACE PROCEDURE .\"menu_pkg$get_menu_items_p_new\"(\ni_restaurant_id bigint,\ni_category_id bigint,\ni_check_availability text,\ni_branch_id bigint,\nINOUT o_items refcursor,\nINOUT o_combo refcursor)\nLANGUAGE 'plpgsql'\n\nAS $BODY$\n\nBEGIN\n\n OPEN o_items FOR\n\n SELECT\n\n a.menu_item_id, a.menu_item_name, a.menu_item_category_id,\nb.menu_item_category_desc, c.menu_item_variant_id,\nc.menu_item_variant_type_id, c.price, c.size_id,\nc.parent_menu_item_variant_id, d.menu_item_variant_type_desc, e.size_desc,\nf.currency_code, a.image, a.mark_id, m.mark_name\n\n FROM .menu_item_category AS b, .menu_item_variant AS c,\n.menu_item_variant_type AS d, .item_size AS e, .restaurant AS f, .menu_item\nAS a\n\n LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id) WHERE\na.menu_item_category_id = b.menu_item_category_id AND a.menu_item_id =\nc.menu_item_id\n AND c.menu_item_variant_type_id = d.menu_item_variant_type_id AND\nd.is_hidden = 'false' AND c.size_id = e.size_id AND a.restaurant_id =\nf.restaurant_id AND f.restaurant_id = i_restaurant_id AND\n(a.menu_item_category_id = i_category_id OR i_category_id IS NULL) AND\nc.menu_item_variant_id =\n (SELECT MIN(menu_item_variant_id) FROM .menu_item_variant WHERE\nmenu_item_id = a.menu_item_id AND deleted = 'N') AND a.active = 'Y' AND\n(CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,',\ni_branch_id, ',%') OR i_branch_id IS NULL) AND\n.is_menu_item_available(a.menu_item_id, i_check_availability) = 'Y'\n\n ORDER BY a.row_order, menu_item_id;\n\n OPEN o_combo FOR\n\n SELECT\n\n mc.*, f.currency_code, (CASE\n\n WHEN blob_id IS NOT NULL THEN 'Y'\n\n ELSE 'N'\n\n END) AS has_image\n\n FROM .menu_combo 
AS mc, .restaurant AS f\n\n WHERE mc.restaurant_id = i_restaurant_id AND active = 'Y' AND\nmc.restaurant_id = f.restaurant_id AND (menu_item_category_id =\ni_category_id OR i_category_id IS NULL)\n\n ORDER BY combo_id;\n\nEND;\n\n$BODY$;\n\n\n 1. open connection\n 2. set auto commit to false\n 3. create callable statement\n 4. execute the call\n 5. get the results\n 6. set autocommit to true\n 7. close the resultset,callable statement and connection\n\n\n\nOn Wed, Jun 9, 2021 at 5:47 PM Ayub Khan <[email protected]> wrote:\n\n> attached is the screenshot of RDS performance insights for AWS and it\n> shows high waiting client writes. The api performance is slow. I read that\n> this might be due to IOPS on RDS. However we have 80k IOPS on this test\n> RDS.\n>\n> Below is the query which is being load tested\n>\n> SELECT\n>\n> a.menu_item_id,\n> a.menu_item_name,\n> a.menu_item_category_id,\n> b.menu_item_category_desc,\n> c.menu_item_variant_id,\n> c.menu_item_variant_type_id,\n> c.price,\n> c.size_id,\n> c.parent_menu_item_variant_id,\n> d.menu_item_variant_type_desc,\n> e.size_desc,\n> f.currency_code,\n> a.image,\n> a.mark_id,\n> m.mark_name\n>\n> FROM .menu_item_category AS b, .menu_item_variant AS\n> c,\n> .menu_item_variant_type AS d, .item_size AS e,\n> .restaurant AS f,\n> .menu_item AS a\n>\n> LEFT OUTER JOIN .mark AS m\n> ON (a.mark_id = m.mark_id)\n>\n> WHERE a.menu_item_category_id =\n> b.menu_item_category_id AND a.menu_item_id = c.menu_item_id AND\n> c.menu_item_variant_type_id =\n> d.menu_item_variant_type_id AND d.is_hidden = 'false' AND\n> c.size_id = e.size_id AND a.restaurant_id =\n> f.restaurant_id AND f.restaurant_id = 1528 AND\n> (a.menu_item_category_id = NULL OR NULL IS NULL)\n>\n> AND c.menu_item_variant_id = (SELECT\n> min(menu_item_variant_id)\n> FROM\n> .menu_item_variant\n> WHERE\n> menu_item_id = a.menu_item_id AND deleted = 'N'\n> LIMIT 1) AND\n> a.active = 'Y'\n> AND (CONCAT_WS('', ',', a.hidden_branch_ids,\n> ',') NOT LIKE 
CONCAT_WS('', '%,4191,%') OR\n> NULL IS NULL)\n> AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'\n>\n> ORDER BY a.row_order, menu_item_id;\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Tue, 15 Jun 2021 18:43:16 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: waiting for client write"
},
{
    "msg_contents": "On Tue, 15 Jun 2021 at 21:13, Ayub Khan <[email protected]> wrote:\n>\n>\n> Would it be a cursor issue on postgres, as there seems to be a difference\nin how cursors are handled in postgres and Oracle database. It seems\ncursors are returned as buffers to the client side. Below are the steps we\ntake from jdbc side\n\ni did this as well to understand what caused clientwrite wait event.\nopen a cursor, fetch some rows but not all, and not close them. run this\nfor multiple connections.\nAll i got was some client read, but no client write.\nI think i might have to intentionally mangle some response packets from\nserver to client to see if that helps,\nbut I was thinking i am diverting from the main problem.\nunless we have a reproducible dataset to work on, i was not sure it was\nhelping.\n\nIf you can have some sample table(s), and can create a proc on the same\nlines as above to query data, and still get the same issues.\nthat would be helpful to debug further.\n\nelse,\nyou may have to give a stacktrace using pstack or gdb / perf etc to help\nfigure out what is going on at code level.\nProfiling_with_perf <https://wiki.postgresql.org/wiki/Profiling_with_perf>\ntrace-query-processing-internals-with-debugger\n<https://www.highgo.ca/2019/10/03/trace-query-processing-internals-with-debugger/>\n\nThis may / may not help, but it'll help learn to eliminate noise :)",
"msg_date": "Tue, 15 Jun 2021 22:52:23 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: waiting for client write"
}
] |
[
{
    "msg_contents": "Hello Everyone,\n\nI trust that everyone is Keep doing very Well !\n\nWe have installed PostgreSQL V13 on window’s server 2016, where we kept the Ram of the Server is 32 GB and disk size is 270 GB.Later we faced some performance issues regarding the database, after deep dive into it we came up and increased the Shared buffer size to 16 Gb. After the changed I am not sure we are facing that Page file Size reached to critical threshold. Currently the Page File size is 9504MB.\n\nHighly Appreciated, if you guys could recommend/suggest any solution / idea.\n\nBr,\nHaseeb Ahmad",
"msg_date": "Thu, 10 Jun 2021 05:45:45 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Page File Size Reached Critical Threshold PostgreSQL V13 "
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 05:45:45AM +0500, Haseeb Khan wrote:\n> We have installed PostgreSQL V13 on window’s server 2016, where we kept the Ram of the Server is 32 GB and disk size is 270 GB.Later we faced some performance issues regarding the database, after deep dive into it we came up and increased the Shared buffer size to 16 Gb. After the changed I am not sure we are facing that Page file Size reached to critical threshold. Currently the Page File size is 9504MB.\n\nHi,\n\nHow large is your DB ? (Or the \"active set\" of the DB, if parts of it are\naccessed infrequently).\n\nWhat was the original performance issue that led you to increase shared_buffers ?\n\nYou've set shared_buffers to half of your RAM, which may be a \"worst case\"\nsetting, since everything that's read into shared_buffers must first be read\ninto the OS cache. So it may be that many blocks are cached twice, rather than\nrelying on a smaller shared_buffers only for the \"hottest\" blocks, and the\nlarger OS cache for everything else.\n\nThere are exceptions to the guideline - for example, if your DB is 23 GB in\nsize, it might make sense to have the entire thing in 24GB OF shared_buffers.\nBut most DB don't need to fit in shared_buffers, and you shouldn't make that a\ngoal, unless you can measure a performance benefit.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Jun 2021 23:28:53 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page File Size Reached Critical Threshold PostgreSQL V13"
},
{
"msg_contents": "Hi Justin, \n\nYou mean, So should I request for to increase the System Ram from 32 Gb to 64 Gb and keep the same parameter setting.Is it ?\n \nBr,\nHaseeb Ahmad \n> \n> On 10-Jun-2021, at 9:28 AM, Justin Pryzby <[email protected]> wrote:\n> \n> On Thu, Jun 10, 2021 at 05:45:45AM +0500, Haseeb Khan wrote:\n>> We have installed PostgreSQL V13 on window’s server 2016, where we kept the Ram of the Server is 32 GB and disk size is 270 GB.Later we faced some performance issues regarding the database, after deep dive into it we came up and increased the Shared buffer size to 16 Gb. After the changed I am not sure we are facing that Page file Size reached to critical threshold. Currently the Page File size is 9504MB.\n> \n> Hi,\n> \n> How large is your DB ? (Or the \"active set\" of the DB, if parts of it are\n> accessed infrequently).\n> \n> What was the original performance issue that led you to increase shared_buffers ?\n> \n> You've set shared_buffers to half of your RAM, which may be a \"worst case\"\n> setting, since everything that's read into shared_buffers must first be read\n> into the OS cache. So it may be that many blocks are cached twice, rather than\n> relying on a smaller shared_buffers only for the \"hottest\" blocks, and the\n> larger OS cache for everything else.\n> \n> There are exceptions to the guideline - for example, if your DB is 23 GB in\n> size, it might make sense to have the entire thing in 24GB OF shared_buffers.\n> But most DB don't need to fit in shared_buffers, and you shouldn't make that a\n> goal, unless you can measure a performance benefit.\n> \n> -- \n> Justin\n\n\n",
"msg_date": "Thu, 10 Jun 2021 11:41:09 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Page File Size Reached Critical Threshold PostgreSQL V13"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 11:41:09AM +0500, Haseeb Khan wrote:\n> You mean, So should I request for to increase the System Ram from 32 Gb to 64 Gb and keep the same parameter setting.Is it ?\n\nNo - I don't know how large your DB is, or the other question that I asked.\nSo I can't possibly make a suggestion to add RAM.\n\nBut I do know that \"half\" is the worst possible setting for many databases.\n\nI suggest to provide some more information, and we can try to suggest a better\nconfiguration.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOn 10-Jun-2021, at 9:28 AM, Justin Pryzby <[email protected]> wrote:\n> > On Thu, Jun 10, 2021 at 05:45:45AM +0500, Haseeb Khan wrote:\n> >> We have installed PostgreSQL V13 on window’s server 2016, where we kept the Ram of the Server is 32 GB and disk size is 270 GB.Later we faced some performance issues regarding the database, after deep dive into it we came up and increased the Shared buffer size to 16 Gb. After the changed I am not sure we are facing that Page file Size reached to critical threshold. Currently the Page File size is 9504MB.\n> > \n> > How large is your DB ? (Or the \"active set\" of the DB, if parts of it are\n> > accessed infrequently).\n> > \n> > What was the original performance issue that led you to increase shared_buffers ?\n> > \n> > You've set shared_buffers to half of your RAM, which may be a \"worst case\"\n> > setting, since everything that's read into shared_buffers must first be read\n> > into the OS cache. 
So it may be that many blocks are cached twice, rather than\n> > relying on a smaller shared_buffers only for the \"hottest\" blocks, and the\n> > larger OS cache for everything else.\n> > \n> > There are exceptions to the guideline - for example, if your DB is 23 GB in\n> > size, it might make sense to have the entire thing in 24GB OF shared_buffers.\n> > But most DB don't need to fit in shared_buffers, and you shouldn't make that a\n> > goal, unless you can measure a performance benefit.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 03:55:40 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page File Size Reached Critical Threshold PostgreSQL V13"
},
{
"msg_contents": "Please work through the Slow Query wiki page and try to provide as much\ninformation as possible. It's too hard to try to help if each communication\nincludes only a fraction of the requested information.\n\nOn Thu, Jun 10, 2021 at 04:33:51PM +0500, Haseeb Khan wrote:\n> PFB the query and there are many other queries like these\n> \n> select\n> pd.gender,\n> count(1) as dispensation_counts\n> from cdss_wasfaty.dispensation_fact df\n> inner join cdss_wasfaty.patient_dim pd\n> on df.patient_key = pd.patient_key\n> inner join cdss_wasfaty.date_dim date_dim\n> on df.dispensation_date_key = date_dim.date_key\n> and date_dim.year IN (2020)\n> group by pd.gender\n> \n> On Thu, Jun 10, 2021 at 2:48 PM Justin Pryzby <[email protected]> wrote:\n> \n> > Can you give an example of a query that performed poorly?\n> >\n> > Send the query, and its explain (analyze,buffers,settings) for the query,\n> > and schema for the relevant queries.\n> >\n> > > > https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n",
"msg_date": "Thu, 10 Jun 2021 09:33:46 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page File Size Reached Critical Threshold PostgreSQL V13"
}
] |
[
{
    "msg_contents": "Hello,\n\nAfter checking doc, only mentioned vm.overcommit_memory=2, but didn't\nmentioned vm.overcommit_ratio recommended value\n\nhttps://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n\nsome articles mentioned that 80 or 90 configuration in their env\n\nSo is it OK just to configure vm.overcommit_ratio to 90 please?\n\nThank you",
"msg_date": "Mon, 14 Jun 2021 18:16:35 +0800",
"msg_from": "Yi Sun <[email protected]>",
"msg_from_op": true,
"msg_subject": "overcommit_ratio setting"
},
{
"msg_contents": "On Mon, Jun 14, 2021 at 06:16:35PM +0800, Yi Sun wrote:\n> \n> So is it OK just to configure vm.overcommit_ratio to 90 please?\n\nThis parameter entirely depends on the amount of RAM and swap you have on your\nserver, and how much memory you want to be allocable.\n\nSee https://www.kernel.org/doc/Documentation/vm/overcommit-accounting.\n\nIt's usually a good practice to not allow more than your RAM to be alloced, and\nlet a at least a bit of memory non allocable to make sure that you keep at\nleast some OS cache, but that's up to you.\n\n\n",
"msg_date": "Mon, 14 Jun 2021 22:14:23 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overcommit_ratio setting"
},
{
"msg_contents": "On Mon, 2021-06-14 at 18:16 +0800, Yi Sun wrote:\n> After checking doc, only mentioned vm.overcommit_memory=2, but didn't mentioned vm.overcommit_ratio recommended value\n> https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n> some articles mentioned that 80 or 90 configuration in their env\n> So is it OK just to configure vm.overcommit_ratio to 90 please?\n\nIt depends on the size of RAM and swap space:\n\novercommit_ratio < (RAM - swap) / RAM * 100\n\nHere, RAM is the RAM available to PostgreSQL.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 18:18:23 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overcommit_ratio setting"
},
{
    "msg_contents": "> On Mon, 2021-06-14 at 18:16 +0800, Yi Sun wrote:\n> > After checking doc, only mentioned vm.overcommit_memory=2, but didn't\n> mentioned vm.overcommit_ratio recommended value\n> >\n> https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n> > some articles mentioned that 80 or 90 configuration in their env\n> > So is it OK just to configure vm.overcommit_ratio to 90 please?\n>\n> It depends on the size of RAM and swap space:\n>\n> overcommit_ratio < (RAM - swap) / RAM * 100\n>\n> Here, RAM is the RAM available to PostgreSQL.\n>\n\nThank you for your reply\n\n1. Our env RAM are 4GB, 8 GB, 16 GB... as below url suggestion, could we\nconfigure swap as below?\nhttps://opensource.com/article/18/9/swap-space-linux-systems\n\nRAM swap\n\n2GB – 8GB = RAM\n>8GB 8GB\n\n2. If the RAM is 4GB and 8GB, the formula (RAM - swap) / RAM * 100 result\nwill become to 0, how could we configure overcommit_ratio please?\n\nThank you",
"msg_date": "Tue, 15 Jun 2021 08:34:26 +0800",
"msg_from": "Yi Sun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: overcommit_ratio setting"
},
{
"msg_contents": "On Tue, 2021-06-15 at 08:34 +0800, Yi Sun wrote:\n> > overcommit_ratio < (RAM - swap) / RAM * 100\n> > \n> > Here, RAM is the RAM available to PostgreSQL.\n> \n> Thank you for your reply\n> \n> 1. Our env RAM are 4GB, 8 GB, 16 GB... as below url suggestion, could we configure swap as below?\n> https://opensource.com/article/18/9/swap-space-linux-systems\n> \n> RAM swap\n> \n> 2GB – 8GB = RAM\n> >8GB 8GB\n\nI wouldn't change the swap space to fit overcommit_ratio, but\nthe other way around.\nWith a properly configured PostgreSQL, you won't need a lot of swap space.\n\n> 2. If the RAM is 4GB and 8GB, the formula (RAM - swap) / RAM * 100 result will become to 0,\n> how could we configure overcommit_ratio please?\n\nYou have to use floating point arithmetic.\n\nThe result will only be 0 if RAM = swap.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Tue, 15 Jun 2021 09:07:00 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: overcommit_ratio setting"
},
{
    "msg_contents": "Laurenz Albe <[email protected]> 于2021年6月15日周二 下午3:07写道:\n\n> On Tue, 2021-06-15 at 08:34 +0800, Yi Sun wrote:\n> > > overcommit_ratio < (RAM - swap) / RAM * 100\n> > >\n> > > Here, RAM is the RAM available to PostgreSQL.\n> >\n> > Thank you for your reply\n> >\n> > 1. Our env RAM are 4GB, 8 GB, 16 GB... as below url suggestion, could we\n> configure swap as below?\n> > https://opensource.com/article/18/9/swap-space-linux-systems\n> >\n> > RAM swap\n> >\n> > 2GB – 8GB = RAM\n> > >8GB 8GB\n>\n> I wouldn't change the swap space to fit overcommit_ratio, but\n> the other way around.\n> With a properly configured PostgreSQL, you won't need a lot of swap space.\n>\n> > 2. If the RAM is 4GB and 8GB, the formula (RAM - swap) / RAM * 100\n> result will become to 0,\n> > how could we configure overcommit_ratio please?\n>\n> You have to use floating point arithmetic.\n>\n> The result will only be 0 if RAM = swap.\n>\n\nGot it, so should always use formula overcommit_ratio < (RAM - swap) / RAM\n* 100 regardless of the value\n\nOur prd env RAM are 4GB, 8 GB, 16 GB..., some env configured swap, some env\ndidn't configure swap\n1. Is it OK if prd env didn't configure swap?\n2. The linux OS is CentOS 7, what's the recommended value for swap setting\nbased on different RAM please?\n\n Thank you",
"msg_date": "Tue, 15 Jun 2021 15:45:59 +0800",
"msg_from": "Yi Sun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: overcommit_ratio setting"
}
] |
[
{
    "msg_contents": "Hello Everyone !\n\nI trust that you guys are keep doing very Well ! \n\nDoes anyone have complete documentation to configure PostgreSQL V13 Master and Slave on Windows Server\nand also how to test Manual Failover ?\n\nWould highly appreciated, if someone could help in this regard.\n\nBr,\nHaseeb Ahmad",
"msg_date": "Tue, 15 Jun 2021 00:36:46 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Master - Slave Replication Window Server"
},
{
"msg_contents": "On 15/06/21, Haseeb Khan ([email protected]) wrote:\n> Does anyone have complete documentation to configure PostgreSQL V13 Master\n> and Slave on Windows Server and also how to test Manual Failover ?\n\nI suggest having a look at https://www.postgresql.org/docs/13/high-availability.html\n\nThe server administration documentation at https://www.postgresql.org/docs/13/admin.html has some Windows-specific guides.\n\nRory\n\n\n",
"msg_date": "Mon, 14 Jun 2021 20:41:57 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Master - Slave Replication Window Server"
},
{
"msg_contents": "Thankyou Rory !\n\nBr,\nHaseeb Ahmad\n\n> On 15-Jun-2021, at 12:42 AM, Rory Campbell-Lange <[email protected]> wrote:\n> \n> On 15/06/21, Haseeb Khan ([email protected]) wrote:\n>> Does anyone have complete documentation to configure PostgreSQL V13 Master\n>> and Slave on Windows Server and also how to test Manual Failover ?\n> \n> I suggest having a look at https://www.postgresql.org/docs/13/high-availability.html\n> \n> The server administration documentation at https://www.postgresql.org/docs/13/admin.html has some Windows-specific guides.\n> \n> Rory\n\n\n",
"msg_date": "Tue, 15 Jun 2021 10:42:40 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Master - Slave Replication Window Server"
},
{
    "msg_contents": "Hello Rory,\n\nHope you're doing well !\n\nI have confusion below, Should we create an archive path on the standby\nserver and then set it to recovery.conf file ?\n\n\nrestore_command = 'cp /path/to/archive/%f %p'\n\n\n*BR,*\nHaseeb Ahmad\n\n\n\nOn Tue, Jun 15, 2021 at 10:42 AM Haseeb Khan <[email protected]> wrote:\n\n> Thankyou Rory !\n>\n> Br,\n> Haseeb Ahmad\n>\n> > On 15-Jun-2021, at 12:42 AM, Rory Campbell-Lange <\n> [email protected]> wrote:\n> >\n> > On 15/06/21, Haseeb Khan ([email protected]) wrote:\n> >> Does anyone have complete documentation to configure PostgreSQL V13\n> Master\n> >> and Slave on Windows Server and also how to test Manual Failover ?\n> >\n> > I suggest having a look at\n> https://www.postgresql.org/docs/13/high-availability.html\n> >\n> > The server administration documentation at\n> https://www.postgresql.org/docs/13/admin.html has some Windows-specific\n> guides.\n> >\n> > Rory\n>",
"msg_date": "Tue, 15 Jun 2021 11:17:05 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Master - Slave Replication Window Server"
},
{
"msg_contents": "On 15/06/21, Haseeb Khan ([email protected]) wrote:\n> I have confusion below, Should we create an archive path on the standby\n> server and then set it to recovery.conf file ?\n> \n> restore_command = 'cp /path/to/archive/%f %p'\n\nHi Hasseb\n\nAre you following this procedure?\nhttps://www.postgresql.org/docs/13/continuous-archiving.html\n\nIf so please let us know what problem you are experiencing.\n\nAlso, this is the postgres performance list. Please move this conversation to postgresql general.\n\nCheers\nRory\n\n\n\n\n",
"msg_date": "Tue, 15 Jun 2021 13:05:18 +0100",
"msg_from": "Rory Campbell-Lange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Master - Slave Replication Window Server"
},
{
    "msg_contents": "Hello Rory,\n\nYes , I have followed the document and configured each and everything. But I can’t see archive_Wal_segments is copying to the folder which I have created. So the issue I am facing is that where should I create the archive folder should I create on master or slave server ? Might be I am missing something or doing some mistake.\n\nWould appreciated, if you could help in this regard \n\nKindly send me the Postgres general email, so I can raise this issue over there as well\n\nThanks in advance \n\nBr, \nHaseeb Ahmad\n\n> On 15-Jun-2021, at 5:05 PM, Rory Campbell-Lange <[email protected]> wrote:\n> \n> On 15/06/21, Haseeb Khan ([email protected]) wrote:\n>> I have confusion below, Should we create an archive path on the standby\n>> server and then set it to recovery.conf file ?\n>> \n>> restore_command = 'cp /path/to/archive/%f %p'\n> \n> Hi Hasseb\n> \n> Are you following this procedure?\n> https://www.postgresql.org/docs/13/continuous-archiving.html\n> \n> If so please let us know what problem you are experiencing.\n> \n> Also, this is the postgres performance list. Please move this conversation to postgresql general.\n> \n> Cheers\n> Rory",
"msg_date": "Tue, 15 Jun 2021 17:29:37 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Master - Slave Replication Window Server"
},
{
"msg_contents": "Hi Haseeb,\n\nI had configured replication on windows and made a document in an easy way.\nI am not expert but I hope it will help you.\n\n\n\n\n\nRegards\nAtul\n\n\n\nOn Tuesday, June 15, 2021, Haseeb Khan <[email protected]> wrote:\n\n> Hello Rory,\n>\n> Yes , I have followed the document and configured each and everything. But\n> I can’t see archive_Wal_segments is copying to the folder which I have\n> created. So the issue I am facing is that where should I create the archive\n> folder should I create on master or slave server ? Might be I am missing\n> something or doing some mistake.\n>\n> Would appreciated, if you could help in this regard\n>\n> Kindly send me the Postgres general email, so I can raise this issue over\n> there as well\n>\n> Thanks in advance\n>\n> *Br*,\n> Haseeb Ahmad\n>\n> On 15-Jun-2021, at 5:05 PM, Rory Campbell-Lange <[email protected]>\n> wrote:\n>\n> On 15/06/21, Haseeb Khan ([email protected]) wrote:\n>\n> I have confusion below, Should we create an archive path on the standby\n>\n> server and then set it to recovery.conf file ?\n>\n>\n> restore_command = 'cp /path/to/archive/%f %p'\n>\n>\n> Hi Hasseb\n>\n> Are you following this procedure?\n> https://www.postgresql.org/docs/13/continuous-archiving.html\n>\n> If so please let us know what problem you are experiencing.\n>\n> Also, this is the postgres performance list. Please move this conversation\n> to postgresql general.\n>\n> Cheers\n> Rory\n>\n>\n>",
"msg_date": "Tue, 15 Jun 2021 21:44:05 +0530",
"msg_from": "Atul Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Master - Slave Replication Window Server"
}
] |
[
{
    "msg_contents": "Hello Everyone,\n\nApologies in advance I don't know where to get knowledge regarding this\nissue that's why I posted here, Highly appreciated if someone could help on\nthis regard\n\nWe have installed and configured PostgreSQL V13 Master- Slave streaming\nreplication on Windows server 2016 but replication is not working. I don't\nknow where the error is because we are not facing any error regarding\nreplication and replication also not working.\n\nBelow are the Configuration steps performed on Master and Slave Server\n\n*Master Server postgresql.conf file changes made*\n\nlisten_addresses = '*'\n\nwal_level = replica\n\nwal_writer_delay = 500ms\n\narchive_mode = on\n\narchive_command ='copy %p \\\\server IP\\wal_archive\\%f\"'\n\narchive_timeout = 3600\n\nmax_wal_senders = 6\n\nmax_replication_slots = 6\n\n*pg_hba Master Server File*\n\nhost replication username slave_ip/32 md5\n\nhost replication username master_ip/32 md5\n\n*Below are the Configuration steps performed on Slave Server*\n\n*Note:* I have copied Data directory from master server and paste it on\nSlave server\n\n*Slave Server postgresql.conf file changes made*\n\nlisten_addresses = '*'\n\nwal_level = replica\n\nwal_writer_delay = 500ms\n\n#archive_mode = off\n\nmax_wal_senders = 6\n\nhot_standby = on\n\n*Slave pg_hba.conf file*\n\nhost replication username slave_ip/32 md5\n\nhost replication username master_ip/32 md5\n\n*recovery.conf file*\n\nstandby_mode = 'on'\n\nrestore_command= 'copy \"C:\\wal_archive\\%f\" %p'\n\n<primary_conninfo> = 'host=<MasterIP> port=5432 user=replicator\npassword=*****'\n\nAfter all these changes made when I restart the master and slave server and\nrun the following command on master server to check whether replication is\nworking or not the below mention query return 0 row\n\n*Master Server* select * from pg_stat_replication\" (return 0 row)\n\nWould be highly appreciated, if someone could tell me what exactly the\nissue is or what I am missing in the configuration on both the servers\n(master- slave).Why replication is not working?\n\n*BR,*\nHaseeb Ahmad",
"msg_date": "Wed, 16 Jun 2021 21:25:37 +0500",
"msg_from": "Haseeb Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL V13 Replication Issue"
},
{
    "msg_contents": "I'm no expert, but it looks like you are reading v9.6 documentation for \na v13 installation.\n\nOn 2021-06-16 09:25, Haseeb Khan wrote:\n>\n> ...\n>\n> hot_standby = on\n>\n> ...\n>\n> *recovery.conf file*\n>\n> standby_mode = 'on'\n>",
"msg_date": "Wed, 16 Jun 2021 09:30:24 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL V13 Replication Issue"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 09:25:37PM +0500, Haseeb Khan wrote:\n> Would be highly appreciated, if someone could tell me what exactly the\n> issue is or what I am missing in the configuration on both the servers\n> (master- slave).Why replication is not working?\n\nYour issue is partially here, as recovery.conf is not supported in 13:\n\n> *recovery.conf file*\n\nI would recommend to read this area of the documentation first:\nhttps://www.postgresql.org/docs/13/warm-standby.html#STANDBY-SERVER-SETUP\n\nAll the recovery parameters have been moved to postgresql.conf, and\nsetting up a standby requires an extra file called standby.signal\ncreated at the root of the data folder.\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 10:02:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL V13 Replication Issue"
}
] |
[
{
"msg_contents": "Is this reasonable thinking?\n\nI'd think that one would want a *wal_keep_size* to cover the \npending updates while the standby server might be unavailable, however long one \nmight anticipate that would be.\n\nIn my case, I get a complete replacement (in the form of \"|\"- delimited \nASCII files) of one of the SCHEMAs every Sunday. The size of that ASCII \ndata is about 2GB, so I'm thinking of doubling that to 4GB (256 WAL \nfiles) to protect me in the case of the standby being unavailable during \nthe update. Note that a complete loss of both servers is not \ncatastrophic (I have backups); it would just be annoying.",
"msg_date": "Wed, 16 Jun 2021 17:36:24 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Estimating wal_keep_size"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 05:36:24PM -0700, Dean Gibson (DB Administrator) wrote:\n> Is this reasonable thinking?\n> \n> I'd think that one would want a *wal_keep_size* to cover the pending updates\n> while the standby server might be unavailable, however long one might\n> anticipate that would be.\n\nIt's usually a better approach to use a replication slot, to keep all the\nrequired WAL files, and only when needed.  See\nhttps://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS\nfor more details.\n\nNote that a replication slot will keep all WAL files, which might eventually\nlead to an outage if the standby doesn't come back before the filesystem\ncontaining the logs gets full.  You can cap the maximum amount of retained WAL\nfiles since pg 13 using max_slot_wal_keep_size, see\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE.",
"msg_date": "Thu, 17 Jun 2021 09:02:22 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimating wal_keep_size"
},
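Julien's warning — a slot retains WAL until the standby returns, and max_slot_wal_keep_size (pg 13) caps that — can be monitored from the pg_replication_slots view, whose pg 13 columns wal_status (one of 'reserved', 'extended', 'unreserved', 'lost') and safe_wal_size report how close a slot is to losing segments. Below is a hedged sketch of such a check; the helper name and the warning threshold are my own choices, not from the thread.

```python
from typing import Optional

def slot_health(wal_status: str, safe_wal_size: Optional[int],
                warn_below_bytes: int = 2 * 1024**3) -> str:
    """Classify one row of pg13's pg_replication_slots.

    wal_status is 'reserved', 'extended', 'unreserved', or 'lost';
    safe_wal_size says how many more WAL bytes can be written before the
    slot risks losing segments (NULL unless max_slot_wal_keep_size is set).
    """
    if wal_status == "lost":
        return "broken"      # required WAL is gone; the standby must be rebuilt
    if wal_status == "unreserved":
        return "at-risk"     # needed segments may be removed at the next checkpoint
    if safe_wal_size is not None and safe_wal_size < warn_below_bytes:
        return "warning"     # nearing the max_slot_wal_keep_size cap
    return "ok"
```

Feeding this from a periodic `SELECT slot_name, wal_status, safe_wal_size FROM pg_replication_slots` gives early notice before a capped slot invalidates, or before an uncapped one fills the disk.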
{
"msg_contents": "On 2021-06-16 18:02, Julien Rouhaud wrote:\n> On Wed, Jun 16, 2021 at 05:36:24PM -0700, Dean Gibson (DB Administrator) wrote:\n>> Is this reasonable thinking?\n>>\n>> I'd think that one would want a *wal_keep_size* to cover the pending updates while the standby server might be unavailable, however long one might anticipate that would be.\n> It's usually a better approach to use a replication slot, to keep all the required WAL files, and only when needed. See https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS for more details.\n>\n> Note that a replication slot will keep all WAL files, which might eventually lead to an outage if the standby doesn't come back before the filesystem containing the logs get full. You can cap the maximum amount of retained WAL files since pg 13 using max_slot_wal_keep_size, see\n> https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE.\n\nGranted, but the same question arises about the value for \nmax_slot_wal_keep_size.  Setting either too low risks data loss, & \nsetting either too high results in unnecessary disk space used.  The \nquestion was, is the estimated VALUE reasonable under the circumstances?",
"msg_date": "Fri, 18 Jun 2021 11:13:14 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimating wal_keep_size"
},
{
"msg_contents": "Le sam. 19 juin 2021 à 02:13, Dean Gibson (DB Administrator) <\[email protected]> a écrit :\n\n>\n> Granted, but the same question arises about the value for\n> max_slot_wal_keep_size. Setting either too low risks data loss, & setting\n> either too high results in unnecessary disk space used. The question was,\n> is the estimated VALUE reasonable under the circumstances?\n>\n\nit may be, until one day it won't be. and that day usually happens. when\nyou set up this kind of limit you choose service availability over your\ndata, so you have to accept that it may not be enough. if this is a problem\ndon't setup a limit.\n\n>",
"msg_date": "Sat, 19 Jun 2021 02:27:44 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimating wal_keep_size"
},
{
"msg_contents": "On Fri, 18 Jun 2021 at 23:58, Julien Rouhaud <[email protected]> wrote:\n\n> Le sam. 19 juin 2021 à 02:13, Dean Gibson (DB Administrator) <\n> [email protected]> a écrit :\n>\n>>\n>> Granted, but the same question arises about the value for\n>> max_slot_wal_keep_size. Setting either too low risks data loss, & setting\n>> either too high results in unnecessary disk space used. The question was,\n>> is the estimated VALUE reasonable under the circumstances?\n>>\n>\n> it may be, until one day it won't be. and that day usually happens. when\n> you set up this kind of limit you choose service availability over your\n> data, so you have to accept that it may not be enough. if this is a problem\n> don't setup a limit.\n>\n>>\nyep, that day does come :) and in that case, i used to drop the slot (primary\nis high priority) and rebuild the replica. We already had multiple replicas\nunder a load balancer, so it was feasible.\n\nAnyways, I used to emit pg_current_wal_lsn to graphite (or any other\ntelemetry monitoring) every minute or so, to be able to calculate WAL\ngrowth over a period of time.\nThen used it to estimate how much disk would be required for a PITR\nsetup like barman if we keep 7 days of WAL post backup.\none can use other math expressions like <time>moving avg to estimate wal\ngrowth over the time duration of an incident with the replica etc.\nof course one needs to also calculate how fast the wals would be played\nback, if one has hot_standby_feedback for long running queries on replicas\netc, but I think I put my point.\n\n\n-- \nThanks,\nVijay\nMumbai, India",
"msg_date": "Sat, 19 Jun 2021 00:10:08 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Estimating wal_keep_size"
},
{
"msg_contents": "On 2021-06-16 17:36, Dean Gibson (DB Administrator) wrote:\n> Is this reasonable thinking?\n>\n> I'd think that one would want a *wal_keep_size* to cover the pending \n> updates while the standby server might be unavailable, however long \n> one might anticipate that would be.\n>\n> In my case, I get a complete replacement (in the form of \"|\"- \n> delimited ASCII files) of one of the SCHEMAs every Sunday.  The size \n> of that ASCII data is about 2GB, so I'm thinking of doubling that to \n> 4GB (256 WAL files) to protect me in the case of the standby being \n> unavailable during the update.  Note that a complete loss of both \n> servers is not catastrophic (I have backups); it would just be annoying.\n>\n\nIn the absence of any clear guidance, I temporarily set wal_keep_size to \n16GB & waited for the Sunday update.  That update today created just \nover 6GB of WAL files during the update, so I've set wal_keep_size to \n8GB (512 WAL files).\n\nOh, and wal_keep_size is NOT an upper limit restricting further WAL \nfiles.  It's more like a minimum.",
"msg_date": "Sun, 20 Jun 2021 16:53:35 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimating wal_keep_size"
}
] |
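Vijay's telemetry approach above — sample `pg_current_wal_lsn()` periodically and compute growth — comes down to pg_lsn arithmetic: an LSN like `16/B374D848` is just a 64-bit byte position written as two hex halves. The sketch below shows the client-side math; the helper names and the 2x headroom factor are illustrative choices, not from the thread.

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_lsn string like '16/B374D848' to an absolute byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def wal_growth(lsn_start: str, lsn_end: str) -> int:
    """WAL bytes written between two sampled pg_current_wal_lsn() values."""
    return lsn_to_bytes(lsn_end) - lsn_to_bytes(lsn_start)

def suggest_wal_keep_size(peak_growth_bytes: int, headroom: float = 2.0) -> int:
    """Scale the worst observed growth by a safety factor and round up to
    whole 16 MB WAL segments, the unit wal_keep_size is effectively spent in."""
    seg = 16 * 1024 * 1024
    needed = int(peak_growth_bytes * headroom)
    return ((needed + seg - 1) // seg) * seg
```

With Dean's observed ~6GB Sunday peak, a 2x margin would suggest 12GB; how much margin to keep is a judgment call — he settled on 8GB.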
[
{
"msg_contents": "we have some partitioned tables with inheritance and are planning to migrate them to declarative partitioning.\n\nTable DDL:\n\nCREATE TABLE c_account_p\n\n(\n\n    billing_account_guid character varying(40) NOT NULL,\n\n    ingestion_process_id bigint NOT NULL DEFAULT '-1'::integer,\n\n    load_dttm timestamp(6) without time zone NOT NULL,\n\n    ban integer NOT NULL,\n\n    CONSTRAINT billing_account_pkey PRIMARY KEY (billing_account_guid, ban)\n) PARTITION by RANGE(load_dttm);\nWhen I try the create table, it's throwing the below error:\n\nERROR: insufficient columns in the PRIMARY KEY constraint definition\n\nDETAIL: PRIMARY KEY constraint on table \"l_billing_account_p\" lacks column \"load_dttm\" which is part of the partition key.\nSQL state: 0A000\n\nIs it mandatory/necessary that the partition column should be part of the primary key? cause if I include load_dttm in the PK then it's working fine.db<>fiddle\n\nIf the partition column is supposed to be part of the PK, it's challenging to create a partition by range with the date column, cause the load_dttm column has chances to have duplicates if data is loaded via COPY or INSERT.\n\nINSERT INTO c_account_p SELECT * from c_account_p_bkp ON CONFLICT (billing_account_guid,ban,load_dttm) DO UPDATE SET 'some stuff..'\n\nIf I receive a billing_account_guid, ban combination with a different load_dttm then it will end up with duplicate keys.\n\nCould someone please help me to understand this scenario?\nThanks.",
"msg_date": "Fri, 25 Jun 2021 03:10:07 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition column should be part of PK"
},
{
"msg_contents": "Declarative partitioning was new in v10\nIn v11, it was allowed to create an index on a partitioned table, including\nunique indexes.\n\nHowever it's not a \"global\" index - instead, it's an \"inherited\" index.\nFor a unique index, uniqueness is enforced within each individual index.\nAnd so global uniqueness is only guaranteed if the partition key is included in\nthe index.\n\nOn Fri, Jun 25, 2021 at 03:10:07AM +0000, Nagaraj Raj wrote:\n> CREATE TABLE c_account_p (\n>     billing_account_guid character varying(40) NOT NULL,\n>     ingestion_process_id bigint NOT NULL DEFAULT '-1'::integer,\n>     load_dttm timestamp(6) without time zone NOT NULL,\n>     ban integer NOT NULL,\n>     CONSTRAINT billing_account_pkey PRIMARY KEY (billing_account_guid, ban)\n> ) PARTITION by RANGE(load_dttm);\n> When I try the create table, it's throwing below error:\n> ERROR: insufficient columns in the PRIMARY KEY constraint definition\n> DETAIL: PRIMARY KEY constraint on table \"l_billing_account_p\" lacks column \"load_dttm\" which is part of the partition key.\n> \n> If the partition column should be supposed to be a PK, it's challenging to create a partition by range with the date column, cause the load_dttm column chances to have duplicate if data loaded COPY or INSERT. INSERT INTO c_account_p SELECT * from c_account_p_bkp ON CONFLICT (billing_account_guid,ban,load_dttm) DO UPDATE SET 'some stuff..'\n> \n> If I receive billing_account_guid, ban combination with different load_dttm then it will end up with duplicate keys.\n\nIt sounds like you want a unique index on (billing_account_guid, ban) to\nsupport INSERT ON CONFLICT.  If DO UPDATE SET will never move tuples to a new\npartition, then you could do the INSERT ON CONFLICT on the partition rather\nthan its parent.\n\nBut it cannot be a unique, \"partitioned\" index, without including load_dttm.\n\n-- \nJustin",
"msg_date": "Thu, 24 Jun 2021 23:22:28 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "If I'm not wrong, this is the same thing you asked 2 week ago.\n\nIf so, why not continue the conversation on the same thread, and why not\nreference the old thread ?\n\nI went to the effort to find the old conversation.\nhttps://www.postgresql.org/message-id/[email protected]\n\nIf declaratively partitioned tables and partitioned indexes don't do what you\nwanted, then you should consider not using them for this.\n\nOn Fri, Jun 25, 2021 at 03:10:07AM +0000, Nagaraj Raj wrote:\n> we have some partitioned tables with inherence and planning to migrate them to the declaration.\n> Table DDL:\n> CREATE TABLE c_account_p\n> (\n>     billing_account_guid character varying(40) NOT NULL,\n>     ingestion_process_id bigint NOT NULL DEFAULT '-1'::integer,\n>     load_dttm timestamp(6) without time zone NOT NULL,\n>     ban integer NOT NULL,\n>     CONSTRAINT billing_account_pkey PRIMARY KEY (billing_account_guid, ban)\n> ) PARTITION by RANGE(load_dttm);\n> When I try the create table, it's throwing below error:\n> ERROR: insufficient columns in the PRIMARY KEY constraint definition\n> DETAIL: PRIMARY KEY constraint on table \"l_billing_account_p\" lacks column \"load_dttm\" which is part of the partition key.\n> SQL state: 0A000\n> Is it mandatory/necessary that the partition column should be a primary key? cause if I include load_dttm as PK then it's working fine.db<>fiddle\n> If the partition column should be supposed to be a PK, it's challenging to create a partition by range with the date column, cause the load_dttm column chances to have duplicate if data loaded COPY or INSERT. INSERT INTO c_account_p SELECT * from c_account_p_bkp ON CONFLICT (billing_account_guid,ban,load_dttm) DO UPDATE SET 'some stuff..'\n> If I receive billing_account_guid, ban combination with different load_dttm then it will end up with duplicate keys.\n> Could some please help me to understand this scenario?\n> Thanks.",
"msg_date": "Thu, 8 Jul 2021 17:17:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "On 2021-Jul-08, Justin Pryzby wrote:\n\n> If I'm not wrong, this is the same thing you asked 2 week ago.\n> \n> If so, why not continue the conversation on the same thread, and why not\n> reference the old thread ?\n> \n> I went to the effort to find the old conversation.\n> https://www.postgresql.org/message-id/[email protected]\n\nActually, it looks like you're replying to the same email you replied to\ntwo weeks ago.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Jul 2021 18:28:39 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "I believe this thread qualifies for the funniest thread of 2021 (so far). And yes, this is a top post. :-)\n\nMike Sofen\n\n-----Original Message-----\nFrom: Alvaro Herrera <[email protected]> \nSent: Thursday, July 08, 2021 3:29 PM\nTo: Justin Pryzby <[email protected]>\nCc: Nagaraj Raj <[email protected]>; [email protected]\nSubject: Re: Partition column should be part of PK\n\nOn 2021-Jul-08, Justin Pryzby wrote:\n\n> If I'm not wrong, this is the same thing you asked 2 week ago.\n> \n> If so, why not continue the conversation on the same thread, and why \n> not reference the old thread ?\n> \n> I went to the effort to find the old conversation.\n> https://www.postgresql.org/message-id/20210625042228.GJ29179@telsasoft\n> .com\n\nActually, it looks like you're replying to the same email you replied to two weeks ago.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 16:12:03 -0700",
"msg_from": "\"Mike Sofen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Partition column should be part of PK"
},
{
"msg_contents": "My apologies for making confusion with new thread. Yes its same issue related to earlier post.\nI was trying to figure out how to ensure unique values for columns (billing_account_guid, ban). If i add partition key to constraint , it wont be possible what im looking for.\nMy use case as below \nINSERT INTO t1 SELECT * from t2 ON CONFLICT (billing_account_guid,ban) DO UPDATE SET something…\n\nOr\nINSERT INTO t1 SELECT * from t2 ON CONFLICT constraint (pk or uk)(billing_account_guid,ban) DO UPDATE SET something…\n\nThanks\n\nSent from Yahoo Mail for iPhone\n\n\nOn Thursday, July 8, 2021, 7:12 PM, Mike Sofen <[email protected]> wrote:\n\nI believe this thread qualifies for the funniest thread of 2021 (so far).  And yes, this is a top post. :-)\n\nMike Sofen\n\n-----Original Message-----\nFrom: Alvaro Herrera <[email protected]> \nSent: Thursday, July 08, 2021 3:29 PM\nTo: Justin Pryzby <[email protected]>\nCc: Nagaraj Raj <[email protected]>; [email protected]\nSubject: Re: Partition column should be part of PK\n\nOn 2021-Jul-08, Justin Pryzby wrote:\n\n> If I'm not wrong, this is the same thing you asked 2 week ago.\n> \n> If so, why not continue the conversation on the same thread, and why \n> not reference the old thread ?\n> \n> I went to the effort to find the old conversation.\n> https://www.postgresql.org/message-id/20210625042228.GJ29179@telsasoft\n> .com\n\nActually, it looks like you're replying to the same email you replied to two weeks ago.\n\n-- \nÁlvaro Herrera               Valdivia, Chile  —  https://www.EnterpriseDB.com/",
"msg_date": "Fri, 9 Jul 2021 03:32:46 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "\n\n> On Jul 8, 2021, at 20:32, Nagaraj Raj <[email protected]> wrote:\n> \n> My apologies for making confusion with new thread. Yes its same issue related to earlier post.\n> \n> I was trying to figure out how to ensure unique values for columns (billing_account_guid, ban). If i add partition key to constraint , it wont be possible what im looking for.\n> \n> My use case as below \n> \n> INSERT INTO t1 SELECT * from t2 ON CONFLICT (billing_account_guid,ban) DO UPDATE SET something…\n> \n> Or\n> \n> INSERT INTO t1 SELECT * from t2 ON CONFLICT constraint (pk or uk)(billing_account_guid,ban) DO UPDATE SET something…\n\nRight now, PostgreSQL does not support unique indexes on partitioned tables (that operate across all partitions) unless the partition key is included in the index definition.  If it didn't have that requirement, it would have to independently (and in a concurrency-supported way) scan every partition individually to see if there is a duplicate key violation in any of the partitions, and the machinery to do that does not exist right now.\n\nIf the goal is to make sure there is only one (billing_account_guid, ban, date) combination across the entire partition set, you can create a unique index on the partitioned set as (billing_account_guid, ban, date), and INSERT ... ON CONFLICT DO NOTHING works properly then.\n\nIf the goal is to make sure there is only one (billing_account_uid, ban) in any partition regardless of date, you'll need to do something more sophisticated to make sure that two sessions don't insert an (billing_account_uid, ban) value into two different partitions.  This isn't a great fit for table partitioning, and you might want to reconsider if partitioning the table is the right answer here.  If you *must* have table partitioning, a possible algorithm is:\n\n-- Start a transaction\n-- Hash the (billing_account_uid, ban) key into a 64 bit value.\n-- Use that 64 bit value as a key to a call to pg_advisory_xact_lock() [1] to, in essence, create a signal to any other transaction attempting to insert that pair that it is being modified.\n-- SELECT on that pair to make sure one does not already exist.\n-- If one does not, do the INSERT.\n-- Commit, which releases the advisory lock.\n\nThis doesn't provide quite the same level of uniqueness that a cross-partition index would, but if this is the only code path that does the INSERT, it should keep duplicates from showing up in different partitions.\n\n[1] https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS\n\n",
"msg_date": "Thu, 8 Jul 2021 20:52:21 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
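Christophe's advisory-lock recipe above can be sketched end to end. This is an illustration, not code from the thread: the hashing step (SHA-256 truncated to a signed 64-bit value) and the helper names are my own choices; the table and column names follow the DDL posted earlier. `cur` is assumed to be a DB-API cursor (e.g. psycopg2) inside an open transaction, so `pg_advisory_xact_lock()` is released automatically at COMMIT/ROLLBACK.

```python
import hashlib

def advisory_key(billing_account_guid: str, ban: int) -> int:
    """Map the logical key (billing_account_guid, ban) to the signed
    64-bit bigint that pg_advisory_xact_lock() expects."""
    digest = hashlib.sha256(f"{billing_account_guid}|{ban}".encode()).digest()
    # First 8 bytes as a signed big-endian int64, so it fits a bigint.
    return int.from_bytes(digest[:8], "big", signed=True)

def upsert_once(cur, guid: str, ban: int, load_dttm) -> None:
    """Serialize inserts of one (guid, ban) pair across all partitions."""
    # Any concurrent transaction inserting the same pair blocks here.
    cur.execute("SELECT pg_advisory_xact_lock(%s)", (advisory_key(guid, ban),))
    cur.execute(
        "SELECT 1 FROM c_account_p WHERE billing_account_guid = %s AND ban = %s",
        (guid, ban))
    if cur.fetchone() is None:
        cur.execute(
            "INSERT INTO c_account_p (billing_account_guid, ban, load_dttm) "
            "VALUES (%s, %s, %s)",
            (guid, ban, load_dttm))
```

As Christophe notes, this only guards duplicates if every code path that inserts into the table goes through the same lock.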
{
"msg_contents": "personally, I feel this design is very bad compared to other DB servers.\n> If the goal is to make sure there is only one (billing_account_uid, ban) in any partition regardless of date, you'll need to do something more > sophisticated to make sure that two sessions don't insert an (billing_account_uid, ban) value into two different partitions. This isn't a great fit > for table partitioning, and you might want to reconsider if partitioning the table is the right answer here. If you *must* have table partitioning, a > possible algorithm is:\nyes, this is my use case.\ncan I use some trigger on the partition table before inserting the call that function this one handle conflict? \n\nCREATE or replace FUNCTION insert_trigger() RETURNS trigger LANGUAGE 'plpgsql' COST 100 VOLATILE NOT LEAKPROOFAS $BODY$DECLARE conn_name text; c_table TEXT; t_schema text; c_table1 text; m_table1 text; BEGIN c_table1 := TG_TABLE_NAME; t_schema := TG_TABLE_SCHEMA; m_table1 := t_schema||'.'||TG_TABLE_NAME; SELECT conname FROM pg_constraint WHERE conrelid = TG_TABLE_NAME ::regclass::oid and contype = 'u' into conn_name; execute 'insert into '|| m_table1 || ' values ' || new.* || ' on conflict on constraint ' || conn_name || ' do nothing -- or somthing'; RETURN null; end; $BODY$;\nCREATE TRIGGER insert BEFORE INSERT ON t4 FOR EACH ROW WHEN (pg_trigger_depth() < 1) EXECUTE FUNCTION insert_trigger(); CREATE TRIGGER insert BEFORE INSERT ON t3 FOR EACH ROW WHEN (pg_trigger_depth() < 1) EXECUTE FUNCTION insert_trigger(); .. so on .. \n\n\nhttps://dbfiddle.uk/?rdbms=postgres_11&fiddle=bcfdfc26685ffb498bf82e6d50da95e3\n\n\nPlease suggest.\n\nThanks,Rj\n On Thursday, July 8, 2021, 08:52:35 PM PDT, Christophe Pettus <[email protected]> wrote: \n \n \n\n> On Jul 8, 2021, at 20:32, Nagaraj Raj <[email protected]> wrote:\n> \n> My apologies for making confusion with new thread. 
Yes its same issue related to earlier post.\n> \n> I was trying to figure out how to ensure unique values for columns (billing_account_guid, ban). If i add partition key to constraint , it wont be possible what im looking for.\n> \n> My use case as below \n> \n> INSERT INTO t1 SELECT * from t2 ON CONFLICT (billing_account_guid,ban) DO UPDATE SET something…\n> \n> Or\n> \n> INSERT INTO t1 SELECT * from t2 ON CONFLICT constraint (pk or uk)(billing_account_guid,ban) DO UPDATE SET something…\n\nRight now, PostgreSQL does not support unique indexes on partitioned tables (that operate across all partitions) unless the partition key is included in the index definition. If it didn't have that requirement, it would have to independently (and in a concurrency-supported way) scan every partition individually to see if there is a duplicate key violation in any of the partitions, and the machinery to do that does not exist right now.\n\nIf the goal is to make sure there is only one (billing_account_guid, ban, date) combination across the entire partition set, you can create an index unique index on the partitioned set as (billing_account_guid, ban, date), and INSERT ... ON CONFLICT DO NOTHING works properly then.\n\nIf the goal is to make sure there is only one (billing_account_uid, ban) in any partition regardless of date, you'll need to do something more sophisticated to make sure that two sessions don't insert an (billing_account_uid, ban) value into two different partitions. This isn't a great fit for table partitioning, and you might want to reconsider if partitioning the table is the right answer here. 
If you *must* have table partitioning, a possible algorithm is:\n\n-- Start a transaction\n-- Hash the (billing_account_uid, ban) key into a 64 bit value.\n-- Use that 64 bit value as a key to a call to pg_advisory_xact_lock() [1] to, in essence, create a signal to any other transaction attempting to insert that pair that it is being modified.\n-- SELECT on that pair to make sure one does not already exist.\n-- If one does not, do the INSERT.\n-- Commit, which releases the advisory lock.\n\nThis doesn't provide quite the same level of uniqueness that a cross-partition index would, but if this is the only code path that does the INSERT, it should keep duplicate from showing up in different partitions.\n\n[1] https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS\n\n \n\npersonally, I feel this design is very bad compared to other DB servers.> If the goal is to make sure there is only one (billing_account_uid, ban) in any partition regardless of date, you'll need to do something more > sophisticated to make sure that two sessions don't insert an (billing_account_uid, ban) value into two different partitions. This isn't a great fit > for table partitioning, and you might want to reconsider if partitioning the table is the right answer here. If you *must* have table partitioning, a > possible algorithm is:yes, this is my use case.can I use some trigger on the partition table before inserting the call that function this one handle conflict? 
CREATE or replace FUNCTION insert_trigger() RETURNS trigger\n LANGUAGE 'plpgsql'\n COST 100\n VOLATILE NOT LEAKPROOF\nAS $BODY$\nDECLARE\n conn_name text;\n c_table TEXT;\n t_schema text;\n c_table1 text;\n m_table1 text;\nBEGIN\n c_table1 := TG_TABLE_NAME;\n t_schema := TG_TABLE_SCHEMA;\n m_table1 := t_schema||'.'||TG_TABLE_NAME;\n SELECT conname FROM pg_constraint\n WHERE conrelid = TG_TABLE_NAME::regclass::oid and contype = 'u'\n INTO conn_name;\n execute 'insert into '|| m_table1 || ' values ' || new.* || ' on conflict on constraint ' || conn_name || ' do nothing -- or something';\n RETURN null;\nend;\n$BODY$;\n\nCREATE TRIGGER insert BEFORE INSERT ON t4 FOR EACH ROW WHEN (pg_trigger_depth() < 1) EXECUTE FUNCTION insert_trigger();\nCREATE TRIGGER insert BEFORE INSERT ON t3 FOR EACH ROW WHEN (pg_trigger_depth() < 1) EXECUTE FUNCTION insert_trigger();\n.. so on ..\n\nhttps://dbfiddle.uk/?rdbms=postgres_11&fiddle=bcfdfc26685ffb498bf82e6d50da95e3\n\nPlease suggest.\n\nThanks,\nRj",
"msg_date": "Mon, 12 Jul 2021 00:36:51 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 03:32:46AM +0000, Nagaraj Raj wrote:\n> My apologies for making confusion with new thread. Yes its same issue related to earlier post.\n> I was trying to figure out how to ensure unique values for columns (billing_account_guid, ban). If i add partition key to constraint , it wont be possible what im looking for.\n> My use case as below \n> INSERT INTO t1 SELECT * from t2 ON CONFLICT (billing_account_guid,ban) DO UPDATE SET something…\n> \n> Or\n> INSERT INTO t1 SELECT * from t2 ON CONFLICT constraint (pk or uk)(billing_account_guid,ban) DO UPDATE SET something…\n\nI'm not sure, but did you see the 2nd half of what I wrote in June ?\n\nlightly edited:\n> It sounds like you want a unique index on (billing_account_guid, ban) to\n> support INSERT ON CONFLICT. If DO UPDATE SET will never move tuples to a new\n> partition, then you could do INSERT ON CONFLICT to a partition rather\n> than its parent.\n> \n> But it cannot be a unique, \"partitioned\" index, without including load_dttm.\n\nYou could try something that doesn't use a parent/partitioned index (as I was\nsuggesting). Otherwise I think you'd have to partition by something else\ninvolving the unique columns, or not use declarative partitioning, or not use\ninsert on conflict.\n\nJustin\n\nPS, I'm sorry for my own confusion, but happy if people found it amusing.\n\n\n",
"msg_date": "Sun, 11 Jul 2021 19:38:46 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "\n\n> On Jul 11, 2021, at 17:36, Nagaraj Raj <[email protected]> wrote:\n> \n> personally, I feel this design is very bad compared to other DB servers.\n\nPatches accepted. The issue is that in order to have a partition-set-wide unique index, the system would have to lock the unique index entries in *all* partitions, not just the target one. This logic does not currently exist, and it's not trivial to implement efficiently.\n\n> can I use some trigger on the partition table before inserting the call that function this one handle conflict? \n\nThat doesn't handle the core problem, which is ensuring that two different sessions do not insert the same (billing_account_uid, ban) into two different partitions. That requires some kind of higher-level lock. The example you give isn't required; PostgreSQL will perfectly happily accept a unique constraint on (billing_account_uid, ban) on each partition, and handle attempts to insert a duplicate row correctly (either by returning an error or processing an ON CONFLICT clause). What that does not prevent is a duplicate (billing_account_uid, ban) in two different partitions.\n\nThere's another issue here, which is this design implies that once a particular (billing_account_uid, ban) row is created in the partitioned table, it is never deleted. This means older partitions are never dropped, which means the number of partitions in the table will grow unbounded. This is not going to scale well as the number of partitions starts getting very large.\n\nYou might consider, instead, hash-partitioning on one of billing_account_uid or ban, or reconsider if partitioning is the right solution here.\n\n",
"msg_date": "Sun, 11 Jul 2021 17:47:38 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "On Mon, 12 Jul 2021 at 12:37, Nagaraj Raj <[email protected]> wrote:\n> personally, I feel this design is very bad compared to other DB servers.\n\nI'm not sure exactly what you're referring to here as you didn't quote\nit, but my guess is you mean our lack of global index support.\n\nGenerally, there's not all that much consensus in the community that\nthis would be a good feature to have. Why do people want to use\npartitioning? Many people do it so that they can quickly remove data\nthat's no longer required with a simple DETACH operation. This is\nmetadata only and is generally very fast. Another set of people\npartition as their tables are very large and they become much easier\nto manage when broken down into parts. There's also a group of people\nwho do it for the improved data locality. Unfortunately, if we had a\nglobal index feature then that requires building a single index over\nall partitions. DETACH is no longer a metadata-only operation as we\nmust somehow invalidate or remove tuples that belong to the detached\npartition. The group of people who partitioned to get away from very\nlarge tables now have a very large index. Maybe the only group to get\noff lightly here are the data locality group. They'll still have the\nsame data locality on the heap.\n\nSo in short, many of the benefits of partitioning disappear when you\nhave a global index.\n\nSo, why did you partition your data in the first place? If you feel\nlike you wouldn't mind having a large global index over all partitions\nthen maybe you're better off just using a non-partitioned table to\nstore this data.\n\nDavid\n\n\n",
"msg_date": "Mon, 12 Jul 2021 12:57:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "\nDavid Rowley wrote on 12.07.2021 at 02:57:\n> Generally, there's not all that much consensus in the community that\n> this would be a good feature to have. Why do people want to use\n> partitioning? Many people do it so that they can quickly remove data\n> that's no longer required with a simple DETACH operation. This is\n> metadata only and is generally very fast. Another set of people\n> partition as their tables are very large and they become much easier\n> to manage when broken down into parts. There's also a group of people\n> who do it for the improved data locality. Unfortunately, if we had a\n> global index feature then that requires building a single index over\n> all partitions. DETACH is no longer a metadata-only operation as we\n> must somehow invalidate or remove tuples that belong to the detached\n> partition. The group of people who partitioned to get away from very\n> large tables now have a very large index. Maybe the only group to get\n> off lightly here are the data locality group. They'll still have the\n> same data locality on the heap.\n>\n> So in short, many of the benefits of partitioning disappear when you\n> have a global index.\n\nThe situations where this is useful are large tables where partitioning\nwould turn Seq Scans of the whole table into Seq Scans of a partition,\nor where it would allow for partition wise joins and still have\nforeign keys referencing the partitioned table.\n\nI agree they do have downsides. I only know Oracle as one of those systems\nwhere this is possible, and in general global indexes are somewhat\navoided but there are still situations where they are useful.\nE.g. 
if you want to have foreign keys referencing your partitioned\ntable and including the partition key in the primary key makes no\nsense.\n\nEven though they have disadvantages, I think it would be nice to\nhave the option to create them.\n\nI know that in the Oracle world, they are seldom used (precisely\nbecause of the disadvantages you mentioned) but they do have a place.\n\nThomas\n\n\n",
"msg_date": "Mon, 12 Jul 2021 08:00:47 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition column should be part of PK"
},
{
"msg_contents": "Hi all,\nI think that global indexes could be useful sometimes. That is why Oracle implements them.\nJust to mention two benefits that could be required by a lot of people:\n- Global uniqueness, which shouldn't be in conflict with partitioning\n- Performance! Well, when the index is on a column which is not the partitioning key, a global index would be better for performance...\n\nNevertheless, this doesn't come without a price, and you have described this very well. That is why Oracle invalidates global indexes when certain partition maintenance operations are performed. These indexes have to be rebuilt. But, anyway, such operations could be done \"concurrently\" or \"online\"...\n\nMichel SALAIS\n\n-----Original Message-----\nFrom: David Rowley <[email protected]> \nSent: Monday, July 12, 2021 02:57\nTo: Nagaraj Raj <[email protected]>\nCc: Christophe Pettus <[email protected]>; [email protected]\nSubject: Re: Partition column should be part of PK\n\nOn Mon, 12 Jul 2021 at 12:37, Nagaraj Raj <[email protected]> wrote:\n> personally, I feel this design is very bad compared to other DB servers.\n\nI'm not sure exactly what you're referring to here as you didn't quote it, but my guess is you mean our lack of global index support.\n\nGenerally, there's not all that much consensus in the community that this would be a good feature to have. Why do people want to use partitioning? Many people do it so that they can quickly remove data that's no longer required with a simple DETACH operation. This is metadata only and is generally very fast. Another set of people partition as their tables are very large and they become much easier to manage when broken down into parts. There's also a group of people\nwho do it for the improved data locality. Unfortunately, if we had a\nglobal index feature then that requires building a single index over all partitions. 
DETACH is no longer a metadata-only operation as we must somehow invalidate or remove tuples that belong to the detached partition. The group of people who partitioned to get away from very large tables now have a very large index. Maybe the only group to get off lightly here are the data locality group. They'll still have the same data locality on the heap.\n\nSo in short, many of the benefits of partitioning disappear when you have a global index.\n\nSo, why did you partition your data in the first place? If you feel like you wouldn't mind having a large global index over all partitions then maybe you're better off just using a non-partitioned table to store this data.\n\nDavid\n\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 11:56:40 +0200",
"msg_from": "\"Michel SALAIS\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Partition column should be part of PK"
},
{
"msg_contents": ">\n> Dear all,\n>\nWe are planning to migrate Oracle exadata database to postgresql and db\nsize ranges from 1 tb to 60 TB.\n\nWill the PG support this with performance matching that of the Exadata\nappliance?\nIf anyone could point me in the right direction where I can get the\nbenchmarking done for these two databases, either on premise or any cloud,\nit would be great.\n\nThanks all in advance.\n\nManish\n\n>",
"msg_date": "Mon, 19 Jul 2021 15:39:54 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Performance benchmark of PG"
},
{
"msg_contents": "On Mon, 2021-07-19 at 15:39 +0530, Manish Lad wrote:\n> We are planning to migrate Oracle exadata database to postgresql and db size ranges from 1 tb to 60 TB. \n> \n> Will the PG support this with the performance matching to that of exadata applince? \n> If anyone could point me in the right direction where i xan get the benchmarking done\n> for these two databases either on prime or any cloud would be great. \n\nYou won't find any trustworthy benchmarks anywhere, because Oracle expressly\nforbids publishing of benchmark results in its license, unless Oracle has given\nits permission.\n\nThe question cannot be answered, because performance depends on your workload,\nconfiguration, software and hardware. Perhaps PostgreSQL will be faster, perhaps not.\n\nTest and see.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 19 Jul 2021 13:04:35 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "Yes you are right. I also experienced the same in one such migration from db2\nto PG which had read faster but the write was not meeting the need.\n\nWe then noticed the differences in disk types.\n\nOnce changed it matched the source.\n\nThanks and Regards\n\nManish\n\nOn Mon, 19 Jul 2021, 16:34 Laurenz Albe, <[email protected]> wrote:\n\n> On Mon, 2021-07-19 at 15:39 +0530, Manish Lad wrote:\n> > We are planning to migrate Oracle exadata database to postgresql and db\n> size ranges from 1 tb to 60 TB.\n> >\n> > Will the PG support this with the performance matching to that of\n> exadata applince?\n> > If anyone could point me in the right direction where i xan get the\n> benchmarking done\n> > for these two databases either on prime or any cloud\n> would be great.\n>\n> You won't find any trustworthy benchmarks anywhere, because Oracle\n> expressedly\n> forbids publishing of benchmark results in its license, unless Oracle has\n> given\n> its permission.\n>\n> The question cannot be answered, because performance depends on your\n> workload,\n> configuration, software and hardware. Perhaps PostgreSQL will be faster,\n> perhaps not.\n>\n> Test and see.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>",
"msg_date": "Mon, 19 Jul 2021 16:39:30 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "Hi,\n\nThe question cannot be answered in a proper way, because, in PostgreSQL,\nperformance (response time in query execution events) depends on\n\n1. Your disk/storage hardware. The performance can vary between SSD and HDD\nfor example.\n2. Your PostgreSQL configurations. In other words, configuration parameters\ncan change your performance metrics. But you have to define your\nqueries, data size that a query can SELECT each time, and queries that\nINSERT/UPDATE the database.\n3. Your CPU and MEMORY hardware can also change your performance metrics.\nYou have to compare your hardware infrastructure with Exadata appliances.\n4. You also have to consider the connection pooling part in your\napplication part. PostgreSQL can suffer from performance problems because\nof lack of connection pooling.\n\nRegards.\n\n\nManish Lad <[email protected]> wrote on Mon, 19 Jul 2021 at 14:09:\n\n> Yes you are right. I also experienced same in one such migration from db2\n> to PG which had read faster but the write was not meeting the need.\n>\n> We then noticed the differences in disk types.\n>\n> Once changed it matched the source.\n>\n> Thanks and Regards\n>\n> Manish\n>\n> On Mon, 19 Jul 2021, 16:34 Laurenz Albe, <[email protected]> wrote:\n>\n>> On Mon, 2021-07-19 at 15:39 +0530, Manish Lad wrote:\n>> > We are planning to migrate Oracle exadata database to postgresql and db\n>> size ranges from 1 tb to 60 TB.\n>> >\n>> > Will the PG support this with the performance matching to that of\n>> exadata applince?\n>> > If anyone could point me in the right direction where i xan get the\n>> benchmarking done\n>> > for these two databases either on prime or any cloud\n>> would be great.\n>>\n>> You won't find any trustworthy benchmarks anywhere, because Oracle\n>> expressedly\n>> forbids publishing of benchmark results in its license, unless Oracle has\n>> given\n>> its permission.\n>>\n>> The question cannot be answered, because performance depends on your\n>> 
workload,\n>> configuration, software and hardware. Perhaps PostgreSQL will be faster,\n>> perhaps not.\n>>\n>> Test and see.\n>>\n>> Yours,\n>> Laurenz Albe\n>> --\n>> Cybertec | https://www.cybertec-postgresql.com\n>>\n>>\n\n-- \nHüseyin Demir\n\nSenior Database Platform Engineer\n\nTwitter: https://twitter.com/d3rh5n\nLinkedin: hseyindemir\n<https://www.linkedin.com/in/h%C3%BCseyin-demir-4020699b/>\nGithub: https://github.com/hseyindemir\nGitlab: https://gitlab.com/demirhuseyinn.94\nMedium: https://demirhuseyinn-94.medium.com/",
"msg_date": "Mon, 19 Jul 2021 14:17:51 +0300",
"msg_from": "=?UTF-8?Q?H=C3=BCseyin_Demir?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "Manish Lad schrieb am 19.07.2021 um 12:09:\n> We are planning to migrate Oracle exadata database to postgresql and\n> db size ranges from 1 tb to 60 TB.\n>\n> Will the PG support this with the performance matching to that of\n> exadata applince? If anyone could point me in the right direction\n> where i xan get the benchmarking done for these two databases either\n> on prime or any cloud would be great.\n\n\nAs already pointed out, you won't find such a benchmark.\n\nYou will have to run such a benchmark yourself. Ideally with a workload\nthat represents your use case. Or maybe with something like HammerDB.\n\nBut Exadata isn't only software, it's also hardware especially designed\nto work together with Oracle's enterprise edition.\n\nSo if you want to get any reasonable results, you will at least have to\nbuy hardware that matches the Exadata HW specifications.\n\nSo if you run your own tests, make sure you buy comparable HW for\nPostgres as well (lots of RAM and many fast server grade NVMes)\n\n\n\n",
"msg_date": "Mon, 19 Jul 2021 13:19:19 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "Thank you all for your swift response.\n\nThank you again.\n\nManish\n\nOn Mon, 19 Jul 2021, 15:39 Manish Lad, <[email protected]> wrote:\n\n> Dear all,\n>>\n> We are planning to migrate Oracle exadata database to postgresql and db\n> size ranges from 1 tb to 60 TB.\n>\n> Will the PG support this with the performance matching to that of exadata\n> applince?\n> If anyone could point me in the right direction where i xan get the\n> benchmarking done for these two databases either on prime or any cloud\n> would be great.\n>\n> Thanks all in advance.\n>\n> Manish\n>\n>>",
"msg_date": "Mon, 19 Jul 2021 16:53:39 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "As Thomas rightly pointed out about the feasibility of benchmarking, you may\nstill compare performance of queries on both Exadata as well as PostgreSQL.\nIMO, it may not be on par, but it must be acceptable.\n\nIn the contemporary world, 60TB isn't really a huge database. So, I hardly\nthink you should find any performance issues on PostgreSQL.\n\nAll the best.\n\n\nRegards,\nNinad Shah\n\n\nOn Mon, 19 Jul 2021 at 16:54, Manish Lad <[email protected]> wrote:\n\n> Thank you all for your swift response.\n>\n> Thank you again.\n>\n> Manish\n>\n> On Mon, 19 Jul 2021, 15:39 Manish Lad, <[email protected]> wrote:\n>\n>> Dear all,\n>>>\n>> We are planning to migrate Oracle exadata database to postgresql and db\n>> size ranges from 1 tb to 60 TB.\n>>\n>> Will the PG support this with the performance matching to that of exadata\n>> applince?\n>> If anyone could point me in the right direction where i xan get the\n>> benchmarking done for these two databases either on prime or any cloud\n>> would be great.\n>>\n>> Thanks all in advance.\n>>\n>> Manish\n>>\n>>>",
"msg_date": "Mon, 19 Jul 2021 22:18:24 +0530",
"msg_from": "Ninad Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
},
{
"msg_contents": "Thanks a lot.\n\nOn Mon, 19 Jul 2021, 22:18 Ninad Shah, <[email protected]> wrote:\n\n> As Thomas rightly pointed about the feasibility of benchmarking. You may\n> still compare performance of queries on both Exadata as well as PostgreSQL.\n> IMO, it may not be on par, but it must be acceptable.\n>\n> In the contemporary world, 60TB isn't really a huge database. So, I hardly\n> think you should find any performance issues on PostgreSQL.\n>\n> All the best.\n>\n>\n> Regards,\n> Ninad Shah\n>\n>\n> On Mon, 19 Jul 2021 at 16:54, Manish Lad <[email protected]> wrote:\n>\n>> Thank you all for your swift response.\n>>\n>> Thank you again.\n>>\n>> Manish\n>>\n>> On Mon, 19 Jul 2021, 15:39 Manish Lad, <[email protected]> wrote:\n>>\n>>> Dear all,\n>>>>\n>>> We are planning to migrate Oracle exadata database to postgresql and db\n>>> size ranges from 1 tb to 60 TB.\n>>>\n>>> Will the PG support this with the performance matching to that of\n>>> exadata applince?\n>>> If anyone could point me in the right direction where i xan get the\n>>> benchmarking done for these two databases either on prime or any cloud\n>>> would be great.\n>>>\n>>> Thanks all in advance.\n>>>\n>>> Manish\n>>>\n>>>>",
"msg_date": "Tue, 20 Jul 2021 13:26:58 +0530",
"msg_from": "Manish Lad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance benchmark of PG"
}
] |
[
{
"msg_contents": "I am using postgresql 12 and using cursors in a stored procedure;\nexecuting a procedure which has a cursor is slowing down the call. However, if I\ndo not use the cursor and just execute the queries using JDBC (Java client)\nit's fast.\n\nIs there any setting which needs to be modified to improve the performance\nof cursors? Also facing slow response with reading blobs (images) from db.\nNot an ideal way for storing images in db, but this is a legacy application\nand wanted to check if there is a quick tweak which can improve the\nperformance while reading blob data from db.\n\n--Ayub",
"msg_date": "Fri, 25 Jun 2021 19:09:31 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow performance with cursor"
},
{
"msg_contents": "On Fri, Jun 25, 2021 at 07:09:31PM +0300, Ayub Khan wrote:\n> I am using postgresql 12 and using cursors in a stored procedure,\n> executing procedure which has cursor is slowing down the call. However if I\n> do not use the cursor and just execute the queries using JDBC (Java client)\n> it's fast.\n\nIs the query slower, or is it slow to transfer tuples ?\nI expect there would be a very high overhead if you read a large number of\ntuples one at a time.\n\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n\nIs the slowness between the client-server or on the server side ?\nProvide some details ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 25 Jun 2021 11:21:06 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "slowness is on the database side as I see the CPU goes high for procedures\nreturning the result using cursors. If the same query is executed as a\nprepared statement from Java client there is no slowness.\n\nfor example there are 84 rows returning all are text data from a query. If\nthe result is returned by cursor from the database, the cpu is high on the\ndb.\n\nstored procedure A executes query Q and returns cursor1, this process has\nhigh cpu on the database.\n\ncode changed in Java client to execute the same query as the prepared\nstatement and get back the resultset from the database, this does not\ncreate a high cpu on the database.\n\n--Ayub\n\n\nOn Fri, Jun 25, 2021 at 7:09 PM Ayub Khan <[email protected]> wrote:\n\n>\n> I am using postgresql 12 and using cursors in a stored procedure,\n> executing procedure which has cursor is slowing down the call. However if I\n> do not use the cursor and just execute the queries using JDBC (Java client)\n> it's fast.\n>\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Fri, 25 Jun 2021 19:37:53 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "Ayub Khan <[email protected]> writes:\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n\nPossibly twiddling cursor_tuple_fraction would help. The default setting\ntends to encourage fast-start plans, which might be counterproductive\nif you're always fetching the entire result in one go.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Jun 2021 14:04:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "I set the cursor_tuple_fraction to 1 now I am seeing high cpu for fetch\nall in\n\nThe number of rows returned is less than 200. Why is the high cpu being\nshown for fetch all\n\n-Ayub\n\nOn Fri, 25 Jun 2021, 19:09 Ayub Khan, <[email protected]> wrote:\n\n>\n> I am using postgresql 12 and using cursors in a stored procedure,\n> executing procedure which has cursor is slowing down the call. However if I\n> do not use the cursor and just execute the queries using JDBC (Java client)\n> it's fast.\n>\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n>\n> --Ayub\n>",
"msg_date": "Thu, 1 Jul 2021 19:29:31 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "On Fri, 25 Jun 2021, 19:09 Ayub Khan, <[email protected]> wrote:\n> I am using postgresql 12 and using cursors in a stored procedure,\n> executing procedure which has cursor is slowing down the call. However if I\n> do not use the cursor and just execute the queries using JDBC (Java client)\n> it's fast.\n>\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n\nOn Thu, Jul 01, 2021 at 07:29:31PM +0300, Ayub Khan wrote:\n> I set the cursor_tuple_fraction to 1 now I am seeing high cpu for fetach\n> all in\n> \n> The number of rows returned is less than 200. Why is the high cpu being\n> shown for fetch all\n\nIt seems like you're asking for help, but need to show the stored procedure\nyou're asking for help with.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 11:32:31 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "Justin,\n\nBelow is the stored procedure, is there any scope for improvement?\n\nCREATE OR REPLACE PROCEDURE \"new_api_pkg$get_menu_details_p\"(\ni_user_id bigint,\ni_menu_item_id bigint,\nINOUT o_menu refcursor,\nINOUT o_item refcursor,\nINOUT o_choice refcursor)\nLANGUAGE 'plpgsql'\nAS $BODY$\nBEGIN\n IF i_user_id IS NOT NULL THEN\n OPEN o_menu FOR\n SELECT\n mi.menu_item_id, mi.menu_item_name, mi.menu_item_title,\nmi.restaurant_id, case when mi.image !=null then 'Y' when mi.image is null\nthen 'N' end as has_image,\n0.0 AS rating, 0 AS votes, 0 AS own_rating\n FROM menu_item AS mi\n WHERE mi.menu_item_id = i_menu_item_id AND mi.active = 'Y';\n ELSE\n OPEN o_menu FOR\n SELECT mi.menu_item_id, mi.menu_item_name, mi.menu_item_title,\nmi.restaurant_id, case when mi.image !=null then 'Y' when mi.image is null\nthen 'N' end as has_image,\n0.0 AS rating, 0 AS votes, 0 AS own_rating\n FROM menu_item AS mi\n WHERE mi.menu_item_id = i_menu_item_id AND mi.active = 'Y';\n END IF;\n OPEN o_item FOR\n SELECT\n c.menu_item_variant_id, c.menu_item_variant_type_id,\nc.package_type_code, c.packages_only, c.price,\n CASE\n WHEN c.package_type_code = 'P' THEN\n (SELECT SUM(miv1.calories) FROM package_component AS\npkg_cpm1\n INNER JOIN menu_item_variant AS miv1 ON\npkg_cpm1.component_id = miv1.menu_item_variant_id WHERE pkg_cpm1.package_id\n= c.menu_item_variant_id)\n ELSE c.calories\n END AS calories, c.size_id, c.parent_menu_item_variant_id,\nd.menu_item_variant_type_desc, d.menu_item_variant_type_desc_ar,\ne.size_desc, e.size_desc_ar,15 AS preparation_time,\n\n (SELECT STRING_AGG(CONCAT_WS('', mi.menu_item_name, ' ',\ns.size_desc), ' + '::TEXT ORDER BY pc.component_id)\n FROM package_component AS pc, menu_item_variant AS miv,\nmenu_item AS mi, menu_item_variant_type AS mivt, item_size AS s\n WHERE pc.component_id = miv.menu_item_variant_id AND\nmiv.menu_item_id = mi.menu_item_id AND miv.size_id = s.size_id\n AND pc.package_id = c.menu_item_variant_id AND 
mivt.is_hidden\n= 'false' AND mivt.menu_item_variant_type_id = miv.menu_item_variant_type_id\n GROUP BY pc.package_id) AS package_name\n FROM menu_item AS a, menu_item_variant AS c, menu_item_variant_type\nAS d, item_size AS e\n WHERE a.menu_item_id = c.menu_item_id AND\nc.menu_item_variant_type_id = d.menu_item_variant_type_id AND d.is_hidden =\n'false'\n AND c.size_id = e.size_id AND a.menu_item_id = i_menu_item_id AND\na.active = 'Y' AND c.deleted = 'N'\n ORDER BY c.menu_item_variant_id;\n OPEN o_choice FOR\n SELECT\n c.choice_id, c.choice_name, c.choice_name_ar, c.calories\n FROM choice AS c, menu_item_choice AS mc, menu_item AS mi\n WHERE c.choice_id = mc.choice_id AND mc.menu_item_id =\nmi.menu_item_id AND mc.menu_item_id = i_menu_item_id AND mi.active = 'Y';\nEND;\n$BODY$;\n\n\nOn Fri, Jun 25, 2021 at 7:09 PM Ayub Khan <[email protected]> wrote:\n\n>\n> I am using postgresql 12 and using cursors in a stored procedure,\n> executing procedure which has cursor is slowing down the call. However if I\n> do not use the cursor and just execute the queries using JDBC (Java client)\n> it's fast.\n>\n> Is there any setting which needs to be modified to improve the performance\n> of cursors. Also facing slow response with reading blobs (images) from db.\n> Not an ideal way for storing images in db but this is a legacy application\n> and wanted to check if there a quick tweak which can improve the\n> performance while reading blob data from db.\n>\n> --Ayub\n>\n\n\n-- \n--------------------------------------------------------------------\nSun Certified Enterprise Architect 1.5\nSun Certified Java Programmer 1.4\nMicrosoft Certified Systems Engineer 2000\nhttp://in.linkedin.com/pub/ayub-khan/a/811/b81\nmobile:+966-502674604\n----------------------------------------------------------------------\nIt is proved that Hard Work and kowledge will get you close but attitude\nwill get you there. 
However, it's the Love\nof God that will put you over the top!!",
"msg_date": "Thu, 1 Jul 2021 23:25:13 +0300",
"msg_from": "Ayub Khan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow performance with cursor"
},
{
"msg_contents": "On 7/1/21 10:25 PM, Ayub Khan wrote:\n> Justin,\n> \n> Below is the stored procedure, is there any scope for improvement?\n> \n\nHard to say, based on just the stored procedure source code. The queries\nare not too complex, but we don't know which of them gets selected for\neach cursor, and which of them is the slow one.\n\nI suggest you identify which of the cursors is the most problematic one,\nand focus on investigating it alone. Show us the explain analyze for\nthat query with different cursor_tuple_fraction values and without the\ncursor, and so on.\n\nAs Tom said, for a cursor the optimizer may be picking a plan with low\nstartup cost, on the basis that that's good for a cursor. But if you're\nalways consuming all the tuples, that may be inefficient. It's often an\nissue for queries with LIMIT, but none of the queries you included uses that\nclause, so who knows ...\n\nTry identifying which of the cursors is causing the issues, show us the\nexplain analyze for that query (with and without the cursor), and that\nshould tell us more.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 2 Jul 2021 16:30:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow performance with cursor"
}
] |
[
{
"msg_contents": "Hello guys, I'm facing a problem. Currently I'm working on a Data\ntransformation Pipeline on Postgres. The strategy is,\n\nWe select every tables in a given schema ( 50 tables ), we apply some case\nwhen, translation, enum and load it into a different new schema with a\nCREATE TABLE SCHEMA_2.table_1 AS SELECT * FROM SCHEMA_1.TABLE_1, then we do\nit again about 3 more times and everytime it’s a new schema, new table. We\nonly keep and don’t drop the schema1.\n\nTo orchestrate the whole, we've got a bunch of .sql files that we run by\nusing psql directly. That's our \"strategy\".\n\nSo we're copying a lot of data, but it allows us to debug, and investigate\nbusiness bugs, because we can plug us into schema 2,3 and search why it's\nan issue.\n\nAll is fine, and can work great.\nBut sometimes, some queries that used to take about 20 secs to complete can\nsuddenly end in 5mins.\nImportant all queries have the same shape -> CREATE TABLE SELECT AS *(a bit\nof transform) FROM TABLE). No update, nothing, it’s dead simple.\n\nWe are just trying to copy a table from schema1, to schema2, to schema3 and\nfinally schema3. 
That’s it.\nThe thing to understand here is schema2, schema3 are dropped at every\npipeline transformation, so everytime we run the script, it drops\neverything from schema2 to the final stage.\n\nWe tuned the config a little bit, and we tried kind of everything (\nsynchronous_commit, wal, vacuum )\nNothing works, it’s very random, some query won’t simply work ( even after\nhours ).\n\nWe use different machines, different config, and different datasets.\n\nThe only thing that makes it work every time, in 100% cases, is to put a\nsleep(10sec) between each schema.\nSo we select 50 tables, we create a new schema with it, then we sleep 10\nsec then we do again the same query but with the freshly created schema and\nwe create a third schema, sleep 10s and again..\n\nAnd that makes the whole pipeline successful each time.\n\nSo, It seems it's a background process inside postgres, that should ingest\na lot of data, and we have to give him time to take a rest, like a\nbg_writers or something else ?\nI disabled autovacuum=off . Same.\nWhy does the query never end even after hours ? Why there is no log about\nwhere the query is stuck.\nTo be clear, if I kill the stuck query and run again it will work.\n\nI don't know much about what's going on inside Postgres, which randomly\ntakes a lot of time, with the same code, same data.\n\nPostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\nThank you so much for your time..",
"msg_date": "Thu, 8 Jul 2021 01:00:28 +0200",
"msg_from": "Allan Barrielle <[email protected]>",
"msg_from_op": true,
"msg_subject": "ETL - sql orchestrator is stuck when there is not sleep() between\n queries"
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 01:00:28AM +0200, Allan Barrielle wrote:\n> All is fine, and can work great.\n> But sometimes, some queries that used to take about 20 secs to complete can\n> suddenly end in 5mins.\n> Important all queries have the same shape -> CREATE TABLE SELECT AS *(a bit\n> of transform) FROM TABLE). No update, nothing, it’s dead simple.\n> \n> Nothing works, it’s very random, some query won’t simply work ( even after\n> hours ).\n\nWhen it doesn't work, you could check SELECT * FROM pg_stat_activity, and\nSELECT pg_blocking_pids(pid), * FROM pg_locks, to see what's going on.\n\n> Important all queries have the same shape -> CREATE TABLE SELECT AS *(a bit\n> of transform) FROM TABLE). No update, nothing, it’s dead simple.\n> We are just trying to copy a table from schema1, to schema2, to schema3 and\n> finally schema3. That’s it.\n\nIs it true that the SELECTs have no joins in them ?\n\nDid this ever work better or differently under different versions of postgres ?\n\n> Why does the query never end even after hours ? Why there is no log about\n> where the query is stuck.\n\nPlease send your nondefault config.\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nAlso enable logging (I just added this to the wiki).\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Enable_Logging\n\nIt'd be very useful to get \"explain analyze\" for a working query and for a\nstuck query. It sound like the stuck query never finishes, so maybe the second\npart is impossible (?)\n\nBut it'd be good to get at least \"explain\" output. You'd have to edit your sql\nscript to run an \"explain\" before each query, and run it, logging the ouput,\nuntil you capture the plan for a stuck query. Save the output and send here,\nalong with the query plan for a working query.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Jul 2021 07:33:20 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep()\n between queries"
},
{
"msg_contents": "> We use different machines, different config, and different datasets.\n> ...\n> PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\nIs It possible to upgrade and test with PG 12.7?\n\nIMHO: lot of changes:\n* https://www.postgresql.org/docs/12/release-12-5.html\n* https://www.postgresql.org/docs/12/release-12-6.html\n* https://www.postgresql.org/docs/12/release-12-7.html\n\nJust rule out the possibility that it has been already fixed\n\nRegards,\n Imre\n\n\nAllan Barrielle <[email protected]> ezt írta (időpont: 2021. júl.\n8., Cs, 11:26):\n\n> Hello guys, I'm facing a problem. Currently I'm working on a Data\n> transformation Pipeline on Postgres. The strategy is,\n>\n> We select every tables in a given schema ( 50 tables ), we apply some case\n> when, translation, enum and load it into a different new schema with a\n> CREATE TABLE SCHEMA_2.table_1 AS SELECT * FROM SCHEMA_1.TABLE_1, then we do\n> it again about 3 more times and everytime it’s a new schema, new table. We\n> only keep and don’t drop the schema1.\n>\n> To orchestrate the whole, we've got a bunch of .sql files that we run by\n> using psql directly. That's our \"strategy\".\n>\n> So we're copying a lot of data, but it allows us to debug, and investigate\n> business bugs, because we can plug us into schema 2,3 and search why it's\n> an issue.\n>\n> All is fine, and can work great.\n> But sometimes, some queries that used to take about 20 secs to complete\n> can suddenly end in 5mins.\n> Important all queries have the same shape -> CREATE TABLE SELECT AS *(a\n> bit of transform) FROM TABLE). No update, nothing, it’s dead simple.\n>\n> We are just trying to copy a table from schema1, to schema2, to schema3\n> and finally schema3. 
That’s it.\n> The thing to understand here is schema2, schema3 are dropped at every\n> pipeline transformation, so everytime we run the script, it drops\n> everything from schema2 to the final stage.\n>\n> We tuned the config a little bit, and we tried kind of everything (\n> synchronous_commit, wal, vacuum )\n> Nothing works, it’s very random, some query won’t simply work ( even after\n> hours ).\n>\n> We use different machines, different config, and different datasets.\n>\n> The only thing that makes it work every time, in 100% cases, is to put a\n> sleep(10sec) between each schema.\n> So we select 50 tables, we create a new schema with it, then we sleep 10\n> sec then we do again the same query but with the freshly created schema and\n> we create a third schema, sleep 10s and again..\n>\n> And that makes the whole pipeline successful each time.\n>\n> So, It seems it's a background process inside postgres, that should ingest\n> a lot of data, and we have to give him time to take a rest, like a\n> bg_writers or something else ?\n> I disabled autovacuum=off . Same.\n> Why does the query never end even after hours ? 
Why there is no log about\n> where the query is stuck.\n> To be clear, if I kill the stuck query and run again it will work.\n>\n> I don't know much about what's going on inside Postgres, which randomly\n> takes a lot of time, with the same code, same data.\n>\n> PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled\n> by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n>\n> Thank you so much for your time..\n>\n>\n>",
"msg_date": "Thu, 8 Jul 2021 15:47:15 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep() between\n queries"
},
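The stage-to-stage copy described in the thread can be sketched as follows. Schema, table, and column names here are illustrative, not taken from the actual pipeline:

```sql
-- One transformation stage of the pipeline discussed above:
-- rebuild the target schema, then materialize each table from the
-- previous stage with CREATE TABLE ... AS SELECT.
DROP SCHEMA IF EXISTS schema_2 CASCADE;
CREATE SCHEMA schema_2;

CREATE TABLE schema_2.table_1 AS
SELECT t.*,
       CASE WHEN t.country_code = 'FR' THEN 'France'  -- example "case when" translation
            ELSE 'Other'
       END AS country_label
FROM schema_1.table_1 AS t;
```

Per the original post, this shape is repeated for about 50 tables across three or four schemas, with every schema after schema_1 dropped and rebuilt on each run.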
{
"msg_contents": "Hello,\n\n> Is it true that the SELECTs have no joins in them ?\n\nYes there is a lot of LEFT JOIN.\n\n> When it doesn't work, you could check SELECT * FROM pg_stat_activity, and\n>SELECT pg_blocking_pids(pid), * FROM pg_locks, to see what's going on.\n\nI can't see any blocking queries blocking pg_locks, pg_blocking_pids.\n\n> It'd be very useful to get \"explain analyze\" for a working query and for a\n> stuck query. It sound like the stuck query never finishes, so maybe the\nsecond\n> part is impossible (?)\n\nWe run an explain analysis and we see some very interesting stuff going on.\nIt seems without explicitly adding a `ANALYZE`, the query has a cost of\nover billions, so the query is not stuck but took forever.\nWhen I run the same scripts with an ANALYZE right before running the query,\nthe query is exec is 50secondes and the cost is normal\n\nExplain analyze WITHOUT ANALYZE https://explain.depesz.com/s/RaSr\nExplain analyze same query WITH ANALYZE BEFORE\nhttps://explain.depesz.com/s/tYVl\n\nThe configuration is tuned by aws aurora, but this issue happens also with\na default config.\n\nallow_system_table_mods,off\napplication_name,DataGrip 
2021.1.3\narchive_command,(disabled)\narchive_mode,off\narchive_timeout,5min\narray_nulls,on\nauthentication_timeout,1min\nautovacuum,on\nautovacuum_analyze_scale_factor,0.05\nautovacuum_analyze_threshold,50\nautovacuum_freeze_max_age,200000000\nautovacuum_max_workers,12\nautovacuum_multixact_freeze_max_age,400000000\nautovacuum_naptime,5s\nautovacuum_vacuum_cost_delay,1ms\nautovacuum_vacuum_cost_limit,1200\nautovacuum_vacuum_scale_factor,0.1\nautovacuum_vacuum_threshold,50\nautovacuum_work_mem,-1\nbackend_flush_after,0\nbackslash_quote,safe_encoding\nbgwriter_delay,200ms\nbgwriter_flush_after,0\nbgwriter_lru_maxpages,100\nbgwriter_lru_multiplier,2\nbonjour,off\nbytea_output,hex\ncheck_function_bodies,on\ncheckpoint_completion_target,0.9\ncheckpoint_flush_after,0\ncheckpoint_timeout,15min\ncheckpoint_warning,30s\nclient_encoding,UTF8\nclient_min_messages,notice\ncommit_delay,0\ncommit_siblings,5\nconstraint_exclusion,partition\ncpu_index_tuple_cost,0.005\ncpu_operator_cost,0.0025\ncpu_tuple_cost,0.01\ncursor_tuple_fraction,0.1\nDateStyle,\"ISO, MDY\"\ndb_user_namespace,off\ndeadlock_timeout,1s\ndebug_pretty_print,on\ndebug_print_parse,off\ndebug_print_plan,off\ndebug_print_rewritten,off\ndefault_statistics_target,500\ndefault_text_search_config,pg_catalog.simple\ndefault_transaction_deferrable,off\ndefault_transaction_isolation,read 
committed\ndefault_transaction_read_only,off\ndynamic_library_path,$libdir\neffective_cache_size,4GB\neffective_io_concurrency,600\nenable_bitmapscan,on\nenable_gathermerge,on\nenable_hashagg,on\nenable_hashjoin,on\nenable_indexonlyscan,on\nenable_indexscan,on\nenable_material,on\nenable_mergejoin,on\nenable_nestloop,on\nenable_parallel_append,on\nenable_parallel_hash,on\nenable_partition_pruning,on\nenable_partitionwise_aggregate,off\nenable_partitionwise_join,off\nenable_seqscan,on\nenable_sort,on\nenable_tidscan,on\nescape_string_warning,on\nevent_source,PostgreSQL\nexit_on_error,off\nextra_float_digits,3\nforce_parallel_mode,off\nfrom_collapse_limit,8\nfsync,off\nfull_page_writes,off\ngeqo,on\ngeqo_effort,5\ngeqo_generations,0\ngeqo_pool_size,0\ngeqo_seed,0\ngeqo_selection_bias,2\ngeqo_threshold,12\ngin_fuzzy_search_limit,0\ngin_pending_list_limit,4MB\nhot_standby,off\nhot_standby_feedback,on\nhuge_pages,try\nidle_in_transaction_session_timeout,25min\nignore_checksum_failure,off\nignore_system_indexes,off\nIntervalStyle,postgres\njit,off\njit_above_cost,100000\njit_debugging_support,off\njit_dump_bitcode,off\njit_expressions,on\njit_inline_above_cost,500000\njit_optimize_above_cost,500000\njit_profiling_support,off\njit_provider,llvmjit\njit_tuple_deforming,on\njoin_collapse_limit,8\nlc_monetary,C\nlc_numeric,C\nlc_time,C\nlisten_addresses,*\nlock_timeout,0\nlo_compat_privileges,off\nmaintenance_work_mem,2GB\nmax_connections,100\nmax_files_per_process,1000\nmax_locks_per_transaction,256\nmax_logical_replication_workers,4\nmax_parallel_maintenance_workers,12\nmax_parallel_workers,12\nmax_parallel_workers_per_gather,6\nmax_pred_locks_per_page,2\nmax_pred_locks_per_relation,-2\nmax_pred_locks_per_transaction,64\nmax_prepared_transactions,0\nmax_replication_slots,10\nmax_stack_depth,6MB\nmax_standby_archive_delay,30s\nmax_standby_streaming_delay,14s\nmax_sync_workers_per_subscription,2\nmax_wal_senders,0\nmax_wal_size,8GB\nmax_worker_processes,12\nmin_parallel_index
_scan_size,512kB\nmin_parallel_table_scan_size,8MB\nmin_wal_size,2GB\nold_snapshot_threshold,-1\noperator_precedence_warning,off\nparallel_leader_participation,off\nparallel_setup_cost,1000\nparallel_tuple_cost,0.1\npassword_encryption,md5\nport,5432\npost_auth_delay,0\npre_auth_delay,0\nquote_all_identifiers,off\nrandom_page_cost,1.1\nrestart_after_crash,on\nrow_security,on\nsearch_path,public\nseq_page_cost,1\nsession_replication_role,origin\nshared_buffers,1GB\nstandard_conforming_strings,on\nstatement_timeout,0\nsuperuser_reserved_connections,3\nsynchronize_seqscans,on\nsynchronous_commit,on\nsyslog_facility,local0\nsyslog_ident,postgres\nsyslog_sequence_numbers,on\nsyslog_split_messages,on\ntcp_keepalives_count,9\ntcp_keepalives_idle,7200\ntcp_keepalives_interval,75\ntemp_buffers,8MB\ntemp_file_limit,-1\nTimeZone,UTC\ntrace_notify,off\ntrace_recovery_messages,log\ntrace_sort,off\ntrack_activities,on\ntrack_activity_query_size,4kB\ntrack_commit_timestamp,off\ntrack_counts,on\ntrack_functions,none\ntrack_io_timing,off\ntransform_null_equals,off\nupdate_process_title,on\nvacuum_cleanup_index_scale_factor,0.1\nvacuum_cost_delay,0\nvacuum_cost_limit,200\nvacuum_cost_page_dirty,20\nvacuum_cost_page_hit,1\nvacuum_cost_page_miss,0\nvacuum_defer_cleanup_age,0\nvacuum_freeze_min_age,50000000\nvacuum_freeze_table_age,150000000\nvacuum_multixact_freeze_min_age,5000000\nvacuum_multixact_freeze_table_age,150000000\nwal_buffers,16MB\nwal_compression,off\nwal_level,minimal\nwal_log_hints,off\nwal_receiver_status_interval,10s\nwal_receiver_timeout,30s\nwal_retrieve_retry_interval,5s\nwal_sender_timeout,1min\nwal_sync_method,fdatasync\nwal_writer_delay,200ms\nwal_writer_flush_after,1MB\nwork_mem,2GB\nxmlbinary,base64\nxmloption,content\nzero_damaged_pages,off\n\nOn Thu, Jul 8, 2021 at 2:33 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Jul 08, 2021 at 01:00:28AM +0200, Allan Barrielle wrote:\n> > All is fine, and can work great.\n> > But sometimes, some queries that 
used to take about 20 secs to complete\n> can\n> > suddenly end in 5mins.\n> > Important all queries have the same shape -> CREATE TABLE SELECT AS *(a\n> bit\n> > of transform) FROM TABLE). No update, nothing, it’s dead simple.\n> >\n> > Nothing works, it’s very random, some query won’t simply work ( even\n> after\n> > hours ).\n>\n> When it doesn't work, you could check SELECT * FROM pg_stat_activity, and\n> SELECT pg_blocking_pids(pid), * FROM pg_locks, to see what's going on.\n>\n> > Important all queries have the same shape -> CREATE TABLE SELECT AS *(a\n> bit\n> > of transform) FROM TABLE). No update, nothing, it’s dead simple.\n> > We are just trying to copy a table from schema1, to schema2, to schema3\n> and\n> > finally schema3. That’s it.\n>\n> Is it true that the SELECTs have no joins in them ?\n>\n> Did this ever work better or differently under different versions of\n> postgres ?\n>\n> > Why does the query never end even after hours ? Why there is no log about\n> > where the query is stuck.\n>\n> Please send your nondefault config.\n> https://wiki.postgresql.org/wiki/Server_Configuration\n>\n> Also enable logging (I just added this to the wiki).\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions#Enable_Logging\n>\n> It'd be very useful to get \"explain analyze\" for a working query and for a\n> stuck query. It sound like the stuck query never finishes, so maybe the\n> second\n> part is impossible (?)\n>\n> But it'd be good to get at least \"explain\" output. You'd have to edit\n> your sql\n> script to run an \"explain\" before each query, and run it, logging the\n> ouput,\n> until you capture the plan for a stuck query. 
Save the output and send\n> here,\n> along with the query plan for a working query.\n>\n> --\n> Justin\n>",
"msg_date": "Thu, 8 Jul 2021 15:49:12 +0200",
"msg_from": "Allan Barrielle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep() between\n queries"
},
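When a statement appears stuck, the checks suggested in the thread can be run from a second session. A minimal sketch using the standard catalog views:

```sql
-- List non-idle backends with their blockers, wait state, and runtime.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       wait_event,
       now() - query_start  AS runtime,
       left(query, 80)      AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST;
```

An empty `blocked_by` array combined with a very long runtime, as reported here, points at a bad plan rather than lock contention.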
{
"msg_contents": "On a different machine, we use 12.7. Still same issue\n\nOn Thu, Jul 8, 2021 at 3:49 PM Allan Barrielle <[email protected]>\nwrote:\n\n> Hello,\n>\n> > Is it true that the SELECTs have no joins in them ?\n>\n> Yes there is a lot of LEFT JOIN.\n>\n> > When it doesn't work, you could check SELECT * FROM pg_stat_activity,\n> and\n> >SELECT pg_blocking_pids(pid), * FROM pg_locks, to see what's going on.\n>\n> I can't see any blocking queries blocking pg_locks, pg_blocking_pids.\n>\n> > It'd be very useful to get \"explain analyze\" for a working query and for\n> a\n> > stuck query. It sound like the stuck query never finishes, so maybe the\n> second\n> > part is impossible (?)\n>\n> We run an explain analysis and we see some very interesting stuff going on.\n> It seems without explicitly adding a `ANALYZE`, the query has a cost of\n> over billions, so the query is not stuck but took forever.\n> When I run the same scripts with an ANALYZE right before running the\n> query, the query is exec is 50secondes and the cost is normal\n>\n> Explain analyze WITHOUT ANALYZE https://explain.depesz.com/s/RaSr\n> Explain analyze same query WITH ANALYZE BEFORE\n> https://explain.depesz.com/s/tYVl\n>\n> The configuration is tuned by aws aurora, but this issue happens also with\n> a default config.\n>\n> allow_system_table_mods,off\n> application_name,DataGrip 2021.1.3\n> archive_command,(disabled)\n> archive_mode,off\n> archive_timeout,5min\n> array_nulls,on\n> authentication_timeout,1min\n> autovacuum,on\n> autovacuum_analyze_scale_factor,0.05\n> autovacuum_analyze_threshold,50\n> autovacuum_freeze_max_age,200000000\n> autovacuum_max_workers,12\n> autovacuum_multixact_freeze_max_age,400000000\n> autovacuum_naptime,5s\n> autovacuum_vacuum_cost_delay,1ms\n> autovacuum_vacuum_cost_limit,1200\n> autovacuum_vacuum_scale_factor,0.1\n> autovacuum_vacuum_threshold,50\n> autovacuum_work_mem,-1\n> backend_flush_after,0\n> backslash_quote,safe_encoding\n> 
bgwriter_delay,200ms\n> bgwriter_flush_after,0\n> bgwriter_lru_maxpages,100\n> bgwriter_lru_multiplier,2\n> bonjour,off\n> bytea_output,hex\n> check_function_bodies,on\n> checkpoint_completion_target,0.9\n> checkpoint_flush_after,0\n> checkpoint_timeout,15min\n> checkpoint_warning,30s\n> client_encoding,UTF8\n> client_min_messages,notice\n> commit_delay,0\n> commit_siblings,5\n> constraint_exclusion,partition\n> cpu_index_tuple_cost,0.005\n> cpu_operator_cost,0.0025\n> cpu_tuple_cost,0.01\n> cursor_tuple_fraction,0.1\n> DateStyle,\"ISO, MDY\"\n> db_user_namespace,off\n> deadlock_timeout,1s\n> debug_pretty_print,on\n> debug_print_parse,off\n> debug_print_plan,off\n> debug_print_rewritten,off\n> default_statistics_target,500\n> default_text_search_config,pg_catalog.simple\n> default_transaction_deferrable,off\n> default_transaction_isolation,read committed\n> default_transaction_read_only,off\n> dynamic_library_path,$libdir\n> effective_cache_size,4GB\n> effective_io_concurrency,600\n> enable_bitmapscan,on\n> enable_gathermerge,on\n> enable_hashagg,on\n> enable_hashjoin,on\n> enable_indexonlyscan,on\n> enable_indexscan,on\n> enable_material,on\n> enable_mergejoin,on\n> enable_nestloop,on\n> enable_parallel_append,on\n> enable_parallel_hash,on\n> enable_partition_pruning,on\n> enable_partitionwise_aggregate,off\n> enable_partitionwise_join,off\n> enable_seqscan,on\n> enable_sort,on\n> enable_tidscan,on\n> escape_string_warning,on\n> event_source,PostgreSQL\n> exit_on_error,off\n> extra_float_digits,3\n> force_parallel_mode,off\n> from_collapse_limit,8\n> fsync,off\n> full_page_writes,off\n> geqo,on\n> geqo_effort,5\n> geqo_generations,0\n> geqo_pool_size,0\n> geqo_seed,0\n> geqo_selection_bias,2\n> geqo_threshold,12\n> gin_fuzzy_search_limit,0\n> gin_pending_list_limit,4MB\n> hot_standby,off\n> hot_standby_feedback,on\n> huge_pages,try\n> idle_in_transaction_session_timeout,25min\n> ignore_checksum_failure,off\n> ignore_system_indexes,off\n> IntervalStyle,postgres\n> 
jit,off\n> jit_above_cost,100000\n> jit_debugging_support,off\n> jit_dump_bitcode,off\n> jit_expressions,on\n> jit_inline_above_cost,500000\n> jit_optimize_above_cost,500000\n> jit_profiling_support,off\n> jit_provider,llvmjit\n> jit_tuple_deforming,on\n> join_collapse_limit,8\n> lc_monetary,C\n> lc_numeric,C\n> lc_time,C\n> listen_addresses,*\n> lock_timeout,0\n> lo_compat_privileges,off\n> maintenance_work_mem,2GB\n> max_connections,100\n> max_files_per_process,1000\n> max_locks_per_transaction,256\n> max_logical_replication_workers,4\n> max_parallel_maintenance_workers,12\n> max_parallel_workers,12\n> max_parallel_workers_per_gather,6\n> max_pred_locks_per_page,2\n> max_pred_locks_per_relation,-2\n> max_pred_locks_per_transaction,64\n> max_prepared_transactions,0\n> max_replication_slots,10\n> max_stack_depth,6MB\n> max_standby_archive_delay,30s\n> max_standby_streaming_delay,14s\n> max_sync_workers_per_subscription,2\n> max_wal_senders,0\n> max_wal_size,8GB\n> max_worker_processes,12\n> min_parallel_index_scan_size,512kB\n> min_parallel_table_scan_size,8MB\n> min_wal_size,2GB\n> old_snapshot_threshold,-1\n> operator_precedence_warning,off\n> parallel_leader_participation,off\n> parallel_setup_cost,1000\n> parallel_tuple_cost,0.1\n> password_encryption,md5\n> port,5432\n> post_auth_delay,0\n> pre_auth_delay,0\n> quote_all_identifiers,off\n> random_page_cost,1.1\n> restart_after_crash,on\n> row_security,on\n> search_path,public\n> seq_page_cost,1\n> session_replication_role,origin\n> shared_buffers,1GB\n> standard_conforming_strings,on\n> statement_timeout,0\n> superuser_reserved_connections,3\n> synchronize_seqscans,on\n> synchronous_commit,on\n> syslog_facility,local0\n> syslog_ident,postgres\n> syslog_sequence_numbers,on\n> syslog_split_messages,on\n> tcp_keepalives_count,9\n> tcp_keepalives_idle,7200\n> tcp_keepalives_interval,75\n> temp_buffers,8MB\n> temp_file_limit,-1\n> TimeZone,UTC\n> trace_notify,off\n> trace_recovery_messages,log\n> trace_sort,off\n> 
track_activities,on\n> track_activity_query_size,4kB\n> track_commit_timestamp,off\n> track_counts,on\n> track_functions,none\n> track_io_timing,off\n> transform_null_equals,off\n> update_process_title,on\n> vacuum_cleanup_index_scale_factor,0.1\n> vacuum_cost_delay,0\n> vacuum_cost_limit,200\n> vacuum_cost_page_dirty,20\n> vacuum_cost_page_hit,1\n> vacuum_cost_page_miss,0\n> vacuum_defer_cleanup_age,0\n> vacuum_freeze_min_age,50000000\n> vacuum_freeze_table_age,150000000\n> vacuum_multixact_freeze_min_age,5000000\n> vacuum_multixact_freeze_table_age,150000000\n> wal_buffers,16MB\n> wal_compression,off\n> wal_level,minimal\n> wal_log_hints,off\n> wal_receiver_status_interval,10s\n> wal_receiver_timeout,30s\n> wal_retrieve_retry_interval,5s\n> wal_sender_timeout,1min\n> wal_sync_method,fdatasync\n> wal_writer_delay,200ms\n> wal_writer_flush_after,1MB\n> work_mem,2GB\n> xmlbinary,base64\n> xmloption,content\n> zero_damaged_pages,off\n>\n> On Thu, Jul 8, 2021 at 2:33 PM Justin Pryzby <[email protected]> wrote:\n>\n>> On Thu, Jul 08, 2021 at 01:00:28AM +0200, Allan Barrielle wrote:\n>> > All is fine, and can work great.\n>> > But sometimes, some queries that used to take about 20 secs to complete\n>> can\n>> > suddenly end in 5mins.\n>> > Important all queries have the same shape -> CREATE TABLE SELECT AS *(a\n>> bit\n>> > of transform) FROM TABLE). No update, nothing, it’s dead simple.\n>> >\n>> > Nothing works, it’s very random, some query won’t simply work ( even\n>> after\n>> > hours ).\n>>\n>> When it doesn't work, you could check SELECT * FROM pg_stat_activity, and\n>> SELECT pg_blocking_pids(pid), * FROM pg_locks, to see what's going on.\n>>\n>> > Important all queries have the same shape -> CREATE TABLE SELECT AS *(a\n>> bit\n>> > of transform) FROM TABLE). No update, nothing, it’s dead simple.\n>> > We are just trying to copy a table from schema1, to schema2, to schema3\n>> and\n>> > finally schema3. 
That’s it.\n>>\n>> Is it true that the SELECTs have no joins in them ?\n>>\n>> Did this ever work better or differently under different versions of\n>> postgres ?\n>>\n>> > Why does the query never end even after hours ? Why there is no log\n>> about\n>> > where the query is stuck.\n>>\n>> Please send your nondefault config.\n>> https://wiki.postgresql.org/wiki/Server_Configuration\n>>\n>> Also enable logging (I just added this to the wiki).\n>> https://wiki.postgresql.org/wiki/Slow_Query_Questions#Enable_Logging\n>>\n>> It'd be very useful to get \"explain analyze\" for a working query and for a\n>> stuck query. It sound like the stuck query never finishes, so maybe the\n>> second\n>> part is impossible (?)\n>>\n>> But it'd be good to get at least \"explain\" output. You'd have to edit\n>> your sql\n>> script to run an \"explain\" before each query, and run it, logging the\n>> ouput,\n>> until you capture the plan for a stuck query. Save the output and send\n>> here,\n>> along with the query plan for a working query.\n>>\n>> --\n>> Justin\n>>\n>",
"msg_date": "Thu, 8 Jul 2021 15:51:48 +0200",
"msg_from": "Allan Barrielle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep() between\n queries"
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 03:49:12PM +0200, Allan Barrielle wrote:\n> > Is it true that the SELECTs have no joins in them ?\n> \n> Yes there is a lot of LEFT JOIN.\n> \n> > It'd be very useful to get \"explain analyze\" for a working query and for a\n> > stuck query. It sound like the stuck query never finishes, so maybe the second\n> > part is impossible (?)\n> \n> We run an explain analysis and we see some very interesting stuff going on.\n> It seems without explicitly adding a `ANALYZE`, the query has a cost of\n> over billions, so the query is not stuck but took forever.\n> When I run the same scripts with an ANALYZE right before running the query,\n> the query is exec is 50secondes and the cost is normal\n\nIt sounds like sometimes autoanalyze processes important tables being queried,\nbut sometimes it doesn't.\n\nSince there are JOINs involved, you should analyze the tables after populating\nthem and before querying them. The same as if it were a temp table, or\nanything else.\n\n> The configuration is tuned by aws aurora, [...]\n\n> fsync,off\n> full_page_writes,off\n\nreally?\n\n> vacuum_cleanup_index_scale_factor,0.1\n\nalso interesting\n\n\n",
"msg_date": "Thu, 8 Jul 2021 09:06:03 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep()\n between queries"
},
{
"msg_contents": "fsync is off and full_page_writes is off because the script works one time.\nWe create the db, we load the data, then we dump the data and kill the db.\nNo need to handle servers crashed or anything like that.\n\n0.1 vacuum_cleanup_index_scale_factor is the default value.\n\nOn Thu, Jul 8, 2021 at 4:06 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Jul 08, 2021 at 03:49:12PM +0200, Allan Barrielle wrote:\n> > > Is it true that the SELECTs have no joins in them ?\n> >\n> > Yes there is a lot of LEFT JOIN.\n> >\n> > > It'd be very useful to get \"explain analyze\" for a working query and\n> for a\n> > > stuck query. It sound like the stuck query never finishes, so maybe\n> the second\n> > > part is impossible (?)\n> >\n> > We run an explain analysis and we see some very interesting stuff going\n> on.\n> > It seems without explicitly adding a `ANALYZE`, the query has a cost of\n> > over billions, so the query is not stuck but took forever.\n> > When I run the same scripts with an ANALYZE right before running the\n> query,\n> > the query is exec is 50secondes and the cost is normal\n>\n> It sounds like sometimes autoanalyze processes important tables being\n> queried,\n> but sometimes it doesn't.\n>\n> Since there are JOINs involved, you should analyze the tables after\n> populating\n> them and before querying them. The same as if it were a temp table, or\n> anything else.\n>\n> > The configuration is tuned by aws aurora, [...]\n>\n> > fsync,off\n> > full_page_writes,off\n>\n> really?\n>\n> > vacuum_cleanup_index_scale_factor,0.1\n>\n> also interesting\n>\n\nfsync is off and full_page_writes is off because the script works one time. We create the db, we load the data, then we dump the data and kill the db.No need to handle servers crashed or anything like that.0.1 vacuum_cleanup_index_scale_factor is the default value. 
On Thu, Jul 8, 2021 at 4:06 PM Justin Pryzby <[email protected]> wrote:On Thu, Jul 08, 2021 at 03:49:12PM +0200, Allan Barrielle wrote:\n> > Is it true that the SELECTs have no joins in them ?\n> \n> Yes there is a lot of LEFT JOIN.\n> \n> > It'd be very useful to get \"explain analyze\" for a working query and for a\n> > stuck query. It sound like the stuck query never finishes, so maybe the second\n> > part is impossible (?)\n> \n> We run an explain analysis and we see some very interesting stuff going on.\n> It seems without explicitly adding a `ANALYZE`, the query has a cost of\n> over billions, so the query is not stuck but took forever.\n> When I run the same scripts with an ANALYZE right before running the query,\n> the query is exec is 50secondes and the cost is normal\n\nIt sounds like sometimes autoanalyze processes important tables being queried,\nbut sometimes it doesn't.\n\nSince there are JOINs involved, you should analyze the tables after populating\nthem and before querying them. The same as if it were a temp table, or\nanything else.\n\n> The configuration is tuned by aws aurora, [...]\n\n> fsync,off\n> full_page_writes,off\n\nreally?\n\n> vacuum_cleanup_index_scale_factor,0.1\n\nalso interesting",
"msg_date": "Thu, 8 Jul 2021 16:12:07 +0200",
"msg_from": "Allan Barrielle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ETL - sql orchestrator is stuck when there is not sleep() between\n queries"
}
] |
[
{
"msg_contents": "Hi, \n\n\n\nOn my production environment (PostgreSQL 13.3), one of my queries runs very slow, about 2 minutes.\n\nI noticed that it does not use an execution plan that I anticapited it would.\n\n\n\nThe query is :\n\n\n\nSELECT t.*\n\nFROM test t \n\nWHERE t.\"existe\" IS true\n\nand t.json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n\nORDER BY t.\"id\" DESC \n\nLIMIT 100 OFFSET 0\n\n\n\nI know PostgreSQL is not very good at performing well with pagination and offsets but all my queries must end with \"LIMIT 100 OFFSET 0\", \"LIMIT 100 OFFSET 1\", ...\n\n\n\nIf I display actual Execution Plan, I get this :\n\n\n\nLimit (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)\n\n Buffers: shared hit=5311835 read=585741 dirtied=32\n\n -> Index Scan Backward using test_pk on test (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)\n\n Filter: ((existe IS TRUE) AND (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb))\n\n Rows Removed by Filter: 1215681\n\n Buffers: shared hit=5311835 read=585741 dirtied=32\n\nPlanning:\n\n Buffers: shared hit=1\n\nPlanning Time: 0.283 ms\n\nExecution Time: 121297.878 ms\n\n\n\nThe query runs very slow from limit 1 to 1147. 
\n\nIf I change limit value to 1148, this query runs quite fast ( 0.190 ms) with a nice execution plan :\n\n\n\nSELECT t.*\n\nFROM test t \n\nWHERE t.\"existe\" IS true\n\nand t.\"json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n\nORDER BY t.\"id\" DESC \n\nLIMIT 1148 OFFSET 0\n\n\n\nLimit  (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)\n\n  Buffers: shared hit=17\n\n  ->  Sort  (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)\n\n        Sort Key: id DESC\n\n        Sort Method: quicksort  Memory: 27kB\n\n        Buffers: shared hit=17\n\n        ->  Bitmap Heap Scan on test  (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)\n\n              Recheck Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n\n              Filter: (existe IS TRUE)\n\n              Heap Blocks: exact=1\n\n              Buffers: shared hit=17\n\n              ->  Bitmap Index Scan on test_json_data_idx  (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)\n\n                    Index Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n\n                    Buffers: shared hit=16\n\nPlanning:\n\n  Buffers: shared hit=1\n\nPlanning Time: 0.296 ms\n\nExecution Time: 0.190 ms\n\n\n\nWould you have any suggestions why Postgres chooses such a bad query plan ?\n\n\n\n\n\nServer :\n\n----------------------------------------------------------------------\n\nCPU Model            : AMD EPYC 7281 16-Core Processor\n\nCPU Cores            : 4\n\nCPU Frequency        : 2096.060 MHz\n\nCPU Cache            : 512 KB\n\nTotal Disk           : 888.1 GB (473.0 GB Used)\n\nTotal Mem            : 11973 MB (4922 MB Used)\n\nTotal Swap           : 0 MB (0 MB Used)\n\nOS                   : Debian GNU/Linux 10\n\nArch                 : x86_64 (64 Bit)\n\nKernel               : 5.10.28\n\nVirtualization       : Dedicated\n\n----------------------------------------------------------------------\n\nI/O Speed(1st run)   : 132 MB/s\n\nI/O Speed(2nd run)   : 204 MB/s\n\nI/O Speed(3rd run)   : 197 MB/s\n\nAverage I/O speed    : 177.7 MB/s\n\n\n\n\n\nPostgresql.conf 
:\n\nmax_connections = 100\n\nshared_buffers = 3840MB\n\nhuge_pages = on\n\nwork_mem = 9830kB\n\nmaintenance_work_mem = 960MB\n\neffective_io_concurrency = 200\n\nmax_worker_processes = 3\n\nmax_parallel_maintenance_workers = 2\n\nmax_parallel_workers_per_gather = 2\n\nmax_parallel_workers = 3\n\nmax_wal_size = 4GB\n\nmin_wal_size = 1GB\n\ncheckpoint_completion_target = 0.9\n\neffective_cache_size = 11520MB\n\ndefault_statistics_target = 100\n\nshared_preload_libraries = 'pg_stat_statements'\n\npg_stat_statements.max = 10000\n\npg_stat_statements.track = all\n\n\n\n\n\nTable test : I have just over 1.2 million records on this table\n\nCREATE TABLE test (\n\n\"source\" varchar NOT NULL,\n\nexiste bool NULL,\n\njson_data jsonb NULL,\n\nrow_updated timestamp NOT NULL DEFAULT clock_timestamp(),\n\nrow_inserted timestamp NOT NULL DEFAULT clock_timestamp(),\n\nid uuid NOT NULL,\n\nCONSTRAINT test_pk PRIMARY KEY (id)\n\n);\n\nCREATE INDEX test_existe_idx ON test USING btree (existe);\n\nCREATE INDEX test_id_idx ON test USING btree (id);\n\nCREATE INDEX test_json_data_idx ON test USING gin (json_data jsonb_path_ops);\n\nCREATE INDEX test_row_inserted_idx ON test USING btree (row_inserted);\n\nCREATE INDEX test_row_updated_idx ON test USING btree (row_updated);\n\nCREATE INDEX test_source_idx ON test USING btree (source);\n\n\n\n\n\nselect * from pg_stat_all_tables where relname = 'test' :\n\n\n\nrelid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del\n\n16692|dev       |test   |1816    |724038305   |31413   |36863713     |1215682  |23127    |0        \n\n\n\nn_tup_hot_upd|n_live_tup|n_dead_tup|n_mod_since_analyze|n_ins_since_vacuum|last_vacuum        \n\n0            |1219730   |30        |121                |91                |2021-07-07 14:43:13\n\n\n\nlast_autovacuum    |last_analyze       |last_autoanalyze   |vacuum_count|autovacuum_count|analyze_count|autoanalyze_count\n\n2021-06-10 09:29:54|2021-07-07 14:43:52|2021-06-10 09:31:32|2           |1               |1            |1\n\n\n\nhttp://www.car-expresso.com\n\n\nJoël 
Frid\n\nDirecteur Général\n\n+33 6 14 46 37 68 \n\nhttps://www.car-expresso.com/",
"msg_date": "Thu, 08 Jul 2021 10:14:36 +0200",
"msg_from": "Joel Frid <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange execution plan"
}
] |
[
{
"msg_contents": "Hi, \n\n\n\nOn my production environment (PostgreSQL 13.3), one of my queries runs very slow, about 2 minutes.\n\nI noticed that it does not use an execution plan that I anticapited it would.\n\n\n\nThe query is :\n\n\n\nSELECT t.*\n\nFROM test t \n\nWHERE t.\"existe\" IS true\n\nand t.json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n\nORDER BY t.\"id\" DESC \n\nLIMIT 100 OFFSET 0\n\n\n\nI know PostgreSQL is not very good at performing well with pagination and offsets but all my queries must end with \"LIMIT 100 OFFSET 0\", \"LIMIT 100 OFFSET 1\", ...\n\n\n\nIf I display actual Execution Plan, I get this :\n\n\n\nLimit (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)\n\n Buffers: shared hit=5311835 read=585741 dirtied=32\n\n -> Index Scan Backward using test_pk on test (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)\n\n Filter: ((existe IS TRUE) AND (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb))\n\n Rows Removed by Filter: 1215681\n\n Buffers: shared hit=5311835 read=585741 dirtied=32\n\nPlanning:\n\n Buffers: shared hit=1\n\nPlanning Time: 0.283 ms\n\nExecution Time: 121297.878 ms\n\n\n\nThe query runs very slow from limit 1 to 1147. 
\n\nIf I change limit value to 1148, this query runs quite fast ( 0.190 ms) with a nice execution plan :\n\n\n\nSELECT t.*\n\nFROM test t \n\nWHERE t.\"existe\" IS true\n\nand t.\"json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n\nORDER BY t.\"id\" DESC \n\nLIMIT 1148 OFFSET 0\n\n\n\nLimit  (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)\n\n  Buffers: shared hit=17\n\n  ->  Sort  (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)\n\n        Sort Key: id DESC\n\n        Sort Method: quicksort  Memory: 27kB\n\n        Buffers: shared hit=17\n\n        ->  Bitmap Heap Scan on test  (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)\n\n              Recheck Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n\n              Filter: (existe IS TRUE)\n\n              Heap Blocks: exact=1\n\n              Buffers: shared hit=17\n\n              ->  Bitmap Index Scan on test_json_data_idx  (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)\n\n                    Index Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n\n                    Buffers: shared hit=16\n\nPlanning:\n\n  Buffers: shared hit=1\n\nPlanning Time: 0.296 ms\n\nExecution Time: 0.190 ms\n\n\n\nWould you have any suggestions why Postgres chooses such a bad query plan ?\n\n\n\n\n\nServer :\n\n----------------------------------------------------------------------\n\nCPU Model            : AMD EPYC 7281 16-Core Processor\n\nCPU Cores            : 4\n\nCPU Frequency        : 2096.060 MHz\n\nCPU Cache            : 512 KB\n\nTotal Disk           : 888.1 GB (473.0 GB Used)\n\nTotal Mem            : 11973 MB (4922 MB Used)\n\nTotal Swap           : 0 MB (0 MB Used)\n\nOS                   : Debian GNU/Linux 10\n\nArch                 : x86_64 (64 Bit)\n\nKernel               : 5.10.28\n\nVirtualization       : Dedicated\n\n----------------------------------------------------------------------\n\nI/O Speed(1st run)   : 132 MB/s\n\nI/O Speed(2nd run)   : 204 MB/s\n\nI/O Speed(3rd run)   : 197 MB/s\n\nAverage I/O speed    : 177.7 MB/s\n\n\n\n\n\nPostgresql.conf 
:\n\nmax_connections = 100\n\nshared_buffers = 3840MB\n\nhuge_pages = on\n\nwork_mem = 9830kB\n\nmaintenance_work_mem = 960MB\n\neffective_io_concurrency = 200\n\nmax_worker_processes = 3\n\nmax_parallel_maintenance_workers = 2\n\nmax_parallel_workers_per_gather = 2\n\nmax_parallel_workers = 3\n\nmax_wal_size = 4GB\n\nmin_wal_size = 1GB\n\ncheckpoint_completion_target = 0.9\n\neffective_cache_size = 11520MB\n\ndefault_statistics_target = 100\n\nshared_preload_libraries = 'pg_stat_statements'\n\npg_stat_statements.max = 10000\n\npg_stat_statements.track = all\n\n\n\n\n\nTable test : I have just over 1.2 million records on this table\n\nCREATE TABLE test (\n\n\"source\" varchar NOT NULL,\n\nexiste bool NULL,\n\njson_data jsonb NULL,\n\nrow_updated timestamp NOT NULL DEFAULT clock_timestamp(),\n\nrow_inserted timestamp NOT NULL DEFAULT clock_timestamp(),\n\nid uuid NOT NULL,\n\nCONSTRAINT test_pk PRIMARY KEY (id)\n\n);\n\nCREATE INDEX test_existe_idx ON test USING btree (existe);\n\nCREATE INDEX test_id_idx ON test USING btree (id);\n\nCREATE INDEX test_json_data_idx ON test USING gin (json_data jsonb_path_ops);\n\nCREATE INDEX test_row_inserted_idx ON test USING btree (row_inserted);\n\nCREATE INDEX test_row_updated_idx ON test USING btree (row_updated);\n\nCREATE INDEX test_source_idx ON test USING btree (source);\n\n\n\n\n\nselect * from pg_stat_all_tables where relname = 'test' :\n\n\n\nrelid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del\n\n16692|dev       |test   |1816    |724038305   |31413   |36863713     |1215682  |23127    |0        \n\n\n\nn_tup_hot_upd|n_live_tup|n_dead_tup|n_mod_since_analyze|n_ins_since_vacuum|last_vacuum        \n\n0            |1219730   |30        |121                |91                |2021-07-07 14:43:13\n\n\n\nlast_autovacuum    |last_analyze       |last_autoanalyze   |vacuum_count|autovacuum_count|analyze_count|autoanalyze_count\n\n2021-06-10 09:29:54|2021-07-07 14:43:52|2021-06-10 09:31:32|2           |1               |1            |1\n\n\n\nhttp://www.car-expresso.com\n\n\nJoël 
Frid\n\nDirecteur Général\n\n+33 6 14 46 37 68 \n\nhttps://www.car-expresso.com/",
"msg_date": "Thu, 08 Jul 2021 10:26:46 +0200",
"msg_from": "Joel Frid <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange execution plan"
},
{
"msg_contents": "\n\n> On 08-07-2021, at 04:26, Joel Frid <[email protected]> wrote:\n> \n> Hi, \n> \n> On my production environment (PostgreSQL 13.3), one of my queries runs very slow, about 2 minutes.\n> I noticed that it does not use an execution plan that I anticapited it would.\n> \n> The query is :\n> \n> SELECT t.*\n> FROM test t \n> WHERE t.\"existe\" IS true\n> and t.json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n> ORDER BY t.\"id\" DESC \n> LIMIT 100 OFFSET 0\n> \n> I know PostgreSQL is not very good at performing well with pagination and offsets but all my queries must end with \"LIMIT 100 OFFSET 0\", \"LIMIT 100 OFFSET 1\", ...\n\nI don't think any database performs well with LIMIT+OFFSET.\nUsing OFFSET requires doing a linear scan discarding all rows up to\nthe row in position OFFSET, then the scan continues for LIMIT rows.\nThe greater the value of OFFSET, the slowest the query will perform,\nin general.\nI'd recommend you using cursors for pagination in general (I know it\nmay not be possible for you, just wanted to explain as it could be\nuseful).\n\n> If I display actual Execution Plan, I get this :\n> \n> Limit (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)\n> Buffers: shared hit=5311835 read=585741 dirtied=32\n> -> Index Scan Backward using test_pk on test (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)\n> Filter: ((existe IS TRUE) AND (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb))\n> Rows Removed by Filter: 1215681\n> Buffers: shared hit=5311835 read=585741 dirtied=32\n> Planning:\n> Buffers: shared hit=1\n> Planning Time: 0.283 ms\n> Execution Time: 121297.878 ms\n> \n> The query runs very slow from limit 1 to 1147. 
\n> If I change limit value to 1148, this query runs quite fast ( 0.190 ms) with a nice execution plan :\n> \n> SELECT t.*\n> FROM test t \n> WHERE t.\"existe\" IS true\n> and t.json_data\" @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n> ORDER BY t.\"id\" DESC \n> LIMIT 1148 OFFSET 0\n> \n> Limit (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)\n> Buffers: shared hit=17\n> -> Sort (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)\n> Sort Key: id DESC\n> Sort Method: quicksort Memory: 27kB\n> Buffers: shared hit=17\n> -> Bitmap Heap Scan on test (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)\n> Recheck Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n> Filter: (existe IS TRUE)\n> Heap Blocks: exact=1\n> Buffers: shared hit=17\n> -> Bitmap Index Scan on test_json_data_idx (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)\n> Index Cond: (json_data @> '{\"book\": {\"title\": \"In Search of Lost Time\"}}'::jsonb)\n> Buffers: shared hit=16\n> Planning:\n> Buffers: shared hit=1\n> Planning Time: 0.296 ms\n> Execution Time: 0.190 ms\n> \n> Would you have any suggestions why Postgres chooses a so bad query plan ?\n\nI can guess a bit about this,\n\nOne of the perks of the statistics collector is that it doesn't\ncollect too many statistics to properly estimate the number of rows\nthat will match the \"@>\" operator, as that is quite hard to do. 
In\ngeneral it over-estimates how many rows will match.\nAs you can see on both plans it estimates that 12121 rows will match\nthe \"@>\" clause, even if only 1 actually match.\n\nThis means that the planner is estimating that the cost of using\ntest_json_data_idx and then executing top-k heapsort (to get the\nlatest 100 rows ordered by id) is far greater than iterating over the\ntest_pk index where the first 100 rows that match will be the ones you\nneed and already sorted.\nIn practice, iterating the test_pk index has to read over a lot of\nrows that didn't match the \"@>\" operator, as the actual number of rows\nthat match isn't as large as initially expected.\n\nOnce you increase the LIMIT value to a number high enough the cost of\niterating over any of the indexes becomes similar to the planner,\nafter that threshold it chooses to switch the plan.\n\n\nSo, some suggestions to improve the execution of that query:\n\nOption A: Use a common table expression to \"force\" the usage of\ntest_json_data_idx\n\n    WITH json_matching_rows AS (\n        SELECT t.*\n        FROM test t\n        WHERE t.json_data @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n    )\n    SELECT t.*\n    FROM json_matching_rows t\n    WHERE t.\"existe\" IS true\n    ORDER BY t.\"id\" DESC\n    LIMIT 100 OFFSET 0;\n\n\nOption B: Use the extension pg_hint_plan to hint the usage of\ntest_json_data_idx\n\n\nOption C: Create a functional index for the book title and tweak the\nquery to use it.\nThis can also be a composite index (to have the values sorted by id\nalready) and partial (to only include rows where \"existe\" is true)\n\n    CREATE INDEX test_json_data_composite_idx\n    ON test\n    USING BTREE ((json_data->'book'->>'title'), id DESC)\n    WHERE (existe);\n\n    SELECT t.*\n    FROM test t \n    WHERE t.\"existe\" IS true\n    and t.json_data->'book'->>'title' = 'In Search of Lost Time'\n    ORDER BY t.\"id\" DESC\n    LIMIT 100 OFFSET 0;\n\nBe aware that partial indexes don't support HOT updates.\n\n\nI hope this reply helps you.\n\n\nBest 
regards,\nManuel Weitzman\n\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 17:13:07 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange execution plan"
},
{
"msg_contents": "\n\n> On 08-07-2021, at 17:13, Manuel Weitzman <[email protected]> wrote:\n> \n> I'd recommend you using cursors for pagination in general (I know it\n> may not be possible for you, just wanted to explain as it could be\n> useful).\n\nBy the way, I mean cursor pagination as the general concept.\nI'm not talking about Postgres cursors.\n\nBest regards,\nManuel Weitzman\n\n\n",
"msg_date": "Thu, 8 Jul 2021 17:19:48 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange execution plan"
},
{
"msg_contents": "\n\n> On 08-07-2021, at 17:13, Manuel Weitzman <[email protected]> wrote:\n> \n> Option A: Use a common table expression to \"force\" the usage of\n> test_json_data_idx\n> \n> WITH json_matching_rows AS (\n> SELECT t.*\n> FROM test ti\n> WHERE t.json_data @> '{\"book\":{\"title\":\"In Search of Lost Time\"}}'\n> )\n> SELECT t.*\n> FROM json_matching_rows t\n> WHERE t.\"existe\" IS true\n> ORDER BY t.\"id\" DESC\n> LIMIT 100 OFFSET 0;\n> \n\nThe first query line should be\n WITH MATERIALIZED json_matching_rows AS (\n\nI had forgotten that Postgres 12 removed the optimization barrier on\ncommon table expressions.\nTo introduce it again the MATERIALIZED clause is needed.\n\nApparently I need to work on reviewing my emails properly before\nsending them.\n\n\nBest regards,\nManuel Weitzman\n\n\n",
"msg_date": "Thu, 8 Jul 2021 22:00:54 -0400",
"msg_from": "Manuel Weitzman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange execution plan"
}
] |
[
{
"msg_contents": "Hello everyone,\nI have a scenario where wanted to add PK on partition to make sure to monitor unique values for two columns values. but as PG required to partition column should be part of PK. How can we make sure actual two columns need to be unique values.\nand also while insert into table need be use 'on conflict'.\ncreate table t (id int, pid int , name name , dt date) partition by range(dt);--create unique index on t(id,pid);--alter table t add constraint uk unique (id);--create unique index on t(id,pid);alter table t add constraint uk unique (id,pid,dt);\ncreate table t1 partition of t for values from ('2020-01-01') to ('2020-02-01');alter table t1 add constraint uk1 unique (id,pid);create table t2 partition of t for values from ('2020-02-01') to ('2020-03-01');alter table t2 add constraint uk2 unique (id,pid);create table t4 partition of t for values from ('2020-03-01') to ('2020-04-01');alter table t4 add constraint uk3 unique (id,pid);create table t3 partition of t for values from ('2020-04-01') to ('2020-05-01');alter table t3 add constraint uk4 unique (id,pid);\ninsert into t(id,pid,name,dt) values (1,2,'raj','2020-01-01')on conflict (id,pid) do nothing;\nERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification\n\nhttps://dbfiddle.uk/?rdbms=postgres_11&fiddle=36b3eb0d51f8bff4b5d445a77d688d88\n\n\nThanks,Rj\nHello everyone,I have a scenario where wanted to add PK on partition to make sure to monitor unique values for two columns values. but as PG required to partition column should be part of PK. 
How can we make sure actual two columns need to be unique values.and also while insert into table need be use 'on conflict'.create table t (id int, pid int , name name , dt date) partition by range(dt);--create unique index on t(id,pid);--alter table t add constraint uk unique (id);--create unique index on t(id,pid);alter table t add constraint uk unique (id,pid,dt);create table t1 partition of t for values from ('2020-01-01') to ('2020-02-01');alter table t1 add constraint uk1 unique (id,pid);create table t2 partition of t for values from ('2020-02-01') to ('2020-03-01');alter table t2 add constraint uk2 unique (id,pid);create table t4 partition of t for values from ('2020-03-01') to ('2020-04-01');alter table t4 add constraint uk3 unique (id,pid);create table t3 partition of t for values from ('2020-04-01') to ('2020-05-01');alter table t3 add constraint uk4 unique (id,pid);insert into t(id,pid,name,dt) values (1,2,'raj','2020-01-01')on conflict (id,pid) do nothing;ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specificationhttps://dbfiddle.uk/?rdbms=postgres_11&fiddle=36b3eb0d51f8bff4b5d445a77d688d88Thanks,Rj",
"msg_date": "Thu, 8 Jul 2021 21:01:45 +0000 (UTC)",
"msg_from": "Nagaraj Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: there is no unique or exclusion constraint matching the ON\n CONFLICT specification"
}
] |
[
{
"msg_contents": "Hi all,\n\nI got a question about PG log lines with temporary file info like this:\n\ncase 1: log line with no contextual info\n2021-07-07 20:28:15 UTC:10.100.11.95(50274):myapp@mydb:[35200]:LOG: \ntemporary file: path \"base/pgsql_tmp/pgsql_tmp35200.0\", size 389390336\n\ncase 2: log line with contextual info\n2021-07-07 20:56:18 UTC:172.16.193.118(56080):myapp@mydb:[22418]:LOG: \ntemporary file: path \"base/pgsql_tmp/pgsql_tmp22418.0\", size 1048576000\n2021-07-07 20:56:18 \nUTC:172.16.193.118(56080):myapp@mydb:[22418]:CONTEXT: PL/pgSQL function \nmemory.f_memory_usage(boolean) line 13 at RETURN QUERY\n\nThere are at least 2 cases where stuff can spill over to disk:\n* queries that don't fit in work_mem, and\n* temporary tables that don't fit in temp_buffers\n\nQuestion, if log_temp_files is turned on (=0), then how can you tell \nfrom where the temporary log line comes from?\nI see a pattern where work_mem spill overs have a CONTEXT line that \nimmediately follows the LOG LINE with keyword, temporary. See case 2 above.\n\nFor other LOG lines with keyword, temporary, there is no such pattern. \nCould those be the ones caused by temp_buffer spill overs to disk? case \n1 above.\n\nI really want to tune temp_buffers, but I would like to be able to \ndetect when temporary tables are spilling over to disk, so that I can \nincrease temp_buffers.\n\nAny help would be appreciated.\n\nRegards,\nMichael Vitale\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 17:22:56 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "temporary file log lines"
},
{
"msg_contents": "On Thu, 2021-07-08 at 17:22 -0400, MichaelDBA wrote:\n> I got a question about PG log lines with temporary file info like this:\n> \n> case 1: log line with no contextual info\n> 2021-07-07 20:28:15 UTC:10.100.11.95(50274):myapp@mydb:[35200]:LOG: \n> temporary file: path \"base/pgsql_tmp/pgsql_tmp35200.0\", size 389390336\n> \n> case 2: log line with contextual info\n> 2021-07-07 20:56:18 UTC:172.16.193.118(56080):myapp@mydb:[22418]:LOG: \n> temporary file: path \"base/pgsql_tmp/pgsql_tmp22418.0\", size 1048576000\n> 2021-07-07 20:56:18 \n> UTC:172.16.193.118(56080):myapp@mydb:[22418]:CONTEXT: PL/pgSQL function \n> memory.f_memory_usage(boolean) line 13 at RETURN QUERY\n> \n> There are at least 2 cases where stuff can spill over to disk:\n> * queries that don't fit in work_mem, and\n> * temporary tables that don't fit in temp_buffers\n> \n> Question, if log_temp_files is turned on (=0), then how can you tell \n> from where the temporary log line comes from?\n> I see a pattern where work_mem spill overs have a CONTEXT line that \n> immediately follows the LOG LINE with keyword, temporary. See case 2 above.\n> \n> For other LOG lines with keyword, temporary, there is no such pattern. \n> Could those be the ones caused by temp_buffer spill overs to disk? case \n> 1 above.\n> \n> I really want to tune temp_buffers, but I would like to be able to \n> detect when temporary tables are spilling over to disk, so that I can \n> increase temp_buffers.\n> \n> Any help would be appreciated.\n\nI am not sure if you can istinguish those two cases from the log.\n\nWhat I would do is identify the problematic query and run it with\nEXPLAIN (ANALYZE, BUFFERS). 
Then you should see which part of the query\ncreates the temporary files.\n\nIf it is a statement in a function called from your top level query,\nauto_explain with the correct parameters can get you that output for\nthose statements too.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 14:01:52 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary file log lines"
},
{
"msg_contents": "hmmm, I think spilling over to disk for temporary tables is handled by \nan entirely different branch in the PG source code. In fact, some other \nfolks have chimed in and said log_temp_files doesn't relate to temp \nfiles at all use by temporary tables, just queries as you mentioned \nbelow elsewhere. This seems to be a dark area of PG that is not \nconvered well.\n\nRegards,\nMichael Vitale\n\n\nLaurenz Albe wrote on 7/12/2021 8:01 AM:\n> On Thu, 2021-07-08 at 17:22 -0400, MichaelDBA wrote:\n>> I got a question about PG log lines with temporary file info like this:\n>>\n>> case 1: log line with no contextual info\n>> 2021-07-07 20:28:15 UTC:10.100.11.95(50274):myapp@mydb:[35200]:LOG:\n>> temporary file: path \"base/pgsql_tmp/pgsql_tmp35200.0\", size 389390336\n>>\n>> case 2: log line with contextual info\n>> 2021-07-07 20:56:18 UTC:172.16.193.118(56080):myapp@mydb:[22418]:LOG:\n>> temporary file: path \"base/pgsql_tmp/pgsql_tmp22418.0\", size 1048576000\n>> 2021-07-07 20:56:18\n>> UTC:172.16.193.118(56080):myapp@mydb:[22418]:CONTEXT: PL/pgSQL function\n>> memory.f_memory_usage(boolean) line 13 at RETURN QUERY\n>>\n>> There are at least 2 cases where stuff can spill over to disk:\n>> * queries that don't fit in work_mem, and\n>> * temporary tables that don't fit in temp_buffers\n>>\n>> Question, if log_temp_files is turned on (=0), then how can you tell\n>> from where the temporary log line comes from?\n>> I see a pattern where work_mem spill overs have a CONTEXT line that\n>> immediately follows the LOG LINE with keyword, temporary. See case 2 above.\n>>\n>> For other LOG lines with keyword, temporary, there is no such pattern.\n>> Could those be the ones caused by temp_buffer spill overs to disk? 
case\n>> 1 above.\n>>\n>> I really want to tune temp_buffers, but I would like to be able to\n>> detect when temporary tables are spilling over to disk, so that I can\n>> increase temp_buffers.\n>>\n>> Any help would be appreciated.\n> I am not sure if you can istinguish those two cases from the log.\n>\n> What I would do is identify the problematic query and run it with\n> EXPLAIN (ANALYZE, BUFFERS). Then you should see which part of the query\n> creates the temporary files.\n>\n> If it is a statement in a function called from your top level query,\n> auto_explain with the correct parameters can get you that output for\n> those statements too.\n>\n> Yours,\n> Laurenz Albe\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 08:13:16 -0400",
"msg_from": "MichaelDBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary file log lines"
},
{
    "msg_contents": "Hi,\n\nOn Mon, Jul 12, 2021 at 14:13, MichaelDBA <[email protected]> wrote:\n\n> hmmm, I think spilling over to disk for temporary tables is handled by\n> an entirely different branch in the PG source code. In fact, some other\n> folks have chimed in and said log_temp_files doesn't relate at all to the\n> temp files used by temporary tables, just queries as you mentioned\n> below elsewhere. This seems to be a dark area of PG that is not\n> covered well.\n>\n\nAs far as I know, log_temp_files only relates to sort/hash going to disks,\nnot to temporary objects (tables and indexes).\n\nBut, even if they are, they are definitely distinguishable by name.\nSort/hash temp files are located in pgsql_tmp, and have a specific template\nname. Temp files for temporary objects are located in the database\nsubdirectory, and also have a specific template name, different from the\nsort/hash temp files' one.\n\nRegards.",
"msg_date": "Tue, 13 Jul 2021 09:29:33 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary file log lines"
},
{
"msg_contents": "\n\nOn 7/13/21 9:29 AM, Guillaume Lelarge wrote:\n> Hi,\n> \n> Le lun. 12 juil. 2021 à 14:13, MichaelDBA <[email protected] \n> <mailto:[email protected]>> a écrit :\n> \n> hmmm, I think spilling over to disk for temporary tables is handled by\n> an entirely different branch in the PG source code. In fact, some\n> other\n> folks have chimed in and said log_temp_files doesn't relate to temp\n> files at all use by temporary tables, just queries as you mentioned\n> below elsewhere. This seems to be a dark area of PG that is not\n> convered well.\n> \n> \n> As far as I know, log_temp_files only relates to sort/hash going to \n> disks, not to temporary objects (tables and indexes).\n> \n\nRight. log_temp_files does not cover temporary tables.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 13 Jul 2021 20:01:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary file log lines"
}
] |
[
{
"msg_contents": "Hi,\n\nProbably my google-foo is weak today but I couldn't find any \n(convincing) explanation for this.\n\nI'm running PostgreSQL 12.6 on 64-bit Linux (CentOS 7, PostgreSQL \ncompiled from sources) and I'm trying to insert 30k rows into a simple \ntable that has an \"ON INSERT .. FOR EACH STATEMENT\" trigger.\n\n Table \"public.parent_table\"\n\n Column | Type | Collation | Nullable | Default\n-------------+------------------------+-----------+----------+--------------------------------------------\n id | bigint | | not null | nextval('parent_table_id_seq'::regclass)\n name | character varying(64) | | not null |\n enabled | boolean | | not null |\n description | character varying(255) | | |\n deleted | boolean | | not null | false\n is_default | boolean | | not null | false\n\nIndexes:\n \"parent_pkey\" PRIMARY KEY, btree (id)\n \"uniq_name\" UNIQUE, btree (name) WHERE deleted = false\n\nReferenced by:\n TABLE \"child\" CONSTRAINT \"child_fkey\" FOREIGN KEY (parent_id) REFERENCES parent_table(id) ON DELETE CASCADE\n\nTriggers:\n parent_changed BEFORE INSERT OR DELETE OR UPDATE OR TRUNCATE ON parent_table FOR EACH STATEMENT EXECUTE FUNCTION parent_table_changed();\n\nThis is the trigger function\n\nCREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\nAS $function$\nBEGIN\n UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n RETURN NEW;\nEND;\n$function$\n\n\nI'm trying to insert 30k rows (inside a single transaction) into the \nparent table using the following SQL (note that I came across this issue \nwhile debugging an application-level performance problem and the SQL I'm \nusing here is similar to what the application is generating):\n\nBEGIN;\n-- ALTER TABLE public.parent_table DISABLE TRIGGER parent_changed;\nPREPARE my_insert (varchar(64), boolean, varchar(255), boolean, boolean) AS INSERT INTO public.parent_table (name,enabled,description,deleted,is_default) VALUES($1, $2, $3, $4, $5);\nEXECUTE 
my_insert ($$035001$$, true, $$null$$, false, false);\nEXECUTE my_insert ($$035002$$, true, $$null$$, false, false);\n....29998 more lines\n\n\nThis is the execution time I get when running the script while the \ntrigger is enabled:\n\n~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n\nreal 0m8,381s\nuser 0m0,203s\nsys 0m0,287s\n\n\nRunning the same SQL script with trigger disabled shows a ~4x speed-up:\n\n\n~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n\nreal 0m2,284s\nuser 0m0,171s\nsys 0m0,261s\n\n\nDefining the trigger as \"BEFORE INSERT\" or \"AFTER INSERT\" made no \ndifference.\n\nI then got curious, put a \"\\\\timing\" at the start of the SQL script, \nmassaged the psql output a bit and plotted a chart of the statement \nexecution times.\nTo my surprise, I see a linear increase of the per-INSERT execution \ntimes, roughly 4x as well:\n\nWhile the execution time per INSERT remains constant when disabling the \ntrigger before inserting the data:\n\nWhat's causing this slow-down ?\n\nThanks,\nTobias",
"msg_date": "Fri, 16 Jul 2021 23:27:24 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 11:27:24PM +0200, Tobias Gierke wrote:\n> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n> RETURN NEW;\n> END;\n> $function$\n> \n> I'm trying to insert 30k rows (inside a single transaction) into the parent\n\nThe problem is because you're doing 30k updates of data_sync within a txn.\nIdeally it starts with 1 tuple in 1 page but every row updated requires\nscanning the previous N rows, which haven't been vacuumed (and cannot).\nUpdate is essentially delete+insert, and the table will grow with each update\nuntil the txn ends and it's vacuumed.\n\n pages: 176 removed, 1 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 40000 removed, 1 remain, 0 are dead but not yet removable, oldest xmin: 2027\n\nYou could run a single UPDATE rather than 30k triggers.\nOr switch to an INSERT on the table, with an index on it, and call\nmax(last_parent_table_change) from whatever needs to ingest it. And prune the\nold entries and vacuum it outside the transaction. Maybe someone else will\nhave a better suggestion.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 16 Jul 2021 23:40:26 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "In addition to what Justin was saying, another possible consideration may\nbe the transaction isolation level that you're running at. I don't recall\nwhat Postgres' default is off hand but \"Serializable\" is the most strict\nand would likely impose some overhead on what you described. Check out\nhttps://www.postgresql.org/docs/12/transaction-iso.html for details. If\nyour particular use case can loosen up some of the strictness in the\ncontext of that transaction it might possibly result in a speed\nimprovement. Just make sure you don't trade off data integrity for speed or\nelse you'll get invalid data quickly!\n\n -- Ben Scherrey\n\nOn Sat, Jul 17, 2021 at 4:27 AM Tobias Gierke <\[email protected]> wrote:\n\n> Hi,\n>\n> Probably my google-foo is weak today but I couldn't find any (convincing)\n> explanation for this.\n>\n> I'm running PostgreSQL 12.6 on 64-bit Linux (CentOS 7, PostgreSQL compiled\n> from sources) and I'm trying to insert 30k rows into a simple table that\n> has an \"ON INSERT .. 
FOR EACH STATEMENT\" trigger.\n>\n> Table \"public.parent_table\"\n>\n> Column | Type | Collation | Nullable | Default\n> -------------+------------------------+-----------+----------+--------------------------------------------\n> id | bigint | | not null | nextval('parent_table_id_seq'::regclass)\n> name | character varying(64) | | not null |\n> enabled | boolean | | not null |\n> description | character varying(255) | | |\n> deleted | boolean | | not null | false\n> is_default | boolean | | not null | false\n>\n> Indexes:\n> \"parent_pkey\" PRIMARY KEY, btree (id)\n> \"uniq_name\" UNIQUE, btree (name) WHERE deleted = false\n>\n> Referenced by:\n> TABLE \"child\" CONSTRAINT \"child_fkey\" FOREIGN KEY (parent_id) REFERENCES parent_table(id) ON DELETE CASCADE\n>\n> Triggers:\n> parent_changed BEFORE INSERT OR DELETE OR UPDATE OR TRUNCATE ON parent_table FOR EACH STATEMENT EXECUTE FUNCTION parent_table_changed();\n>\n>\n> This is the trigger function\n>\n> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n> RETURN NEW;\n> END;\n> $function$\n>\n>\n> I'm trying to insert 30k rows (inside a single transaction) into the\n> parent table using the following SQL (note that I came across this issue\n> while debugging an application-level performance problem and the SQL I'm\n> using here is similar to what the application is generating):\n>\n> BEGIN;\n> -- ALTER TABLE public.parent_table DISABLE TRIGGER parent_changed;\n> PREPARE my_insert (varchar(64), boolean, varchar(255), boolean, boolean) AS INSERT INTO public.parent_table (name,enabled,description,deleted,is_default) VALUES($1, $2, $3, $4, $5);\n> EXECUTE my_insert ($$035001$$, true, $$null$$, false, false);\n> EXECUTE my_insert ($$035002$$, true, $$null$$, false, false);\n> ....29998 more lines\n>\n>\n> This is the execution time I get when running the script while the trigger\n> is 
enabled:\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m8,381s\n> user 0m0,203s\n> sys 0m0,287s\n>\n>\n> Running the same SQL script with trigger disabled shows a ~4x speed-up:\n>\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m2,284s\n> user 0m0,171s\n> sys 0m0,261s\n>\n>\n> Defining the trigger as \"BEFORE INSERT\" or \"AFTER INSERT\" made no\n> difference.\n>\n> I then got curious , put a \"/timing\" at the start of the SQL script,\n> massaged the psql output a bit and plotted a chart of the statement\n> execution times.\n> To my surprise, I see a linear increase of the per-INSERT execution times,\n> roughly 4x as well:\n>\n> While the execution time per INSERT remains constant when disabling the\n> trigger before inserting the data:\n>\n> What's causing this slow-down ?\n>\n> Thanks,\n> Tobias\n>",
"msg_date": "Sat, 17 Jul 2021 13:48:45 +0700",
"msg_from": "Benjamin Scherrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "On Sat, 17 Jul 2021 at 16:40, Justin Pryzby <[email protected]> wrote:\n> You could run a single UPDATE rather than 30k triggers.\n> Or switch to an INSERT on the table, with an index on it, and call\n> max(last_parent_table_change) from whatever needs to ingest it. And prune the\n> old entries and vacuum it outside the transaction. Maybe someone else will\n> have a better suggestion.\n\nMaybe just change the UPDATE statement to:\n\nUPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP WHERE\nlast_parent_table_change <> CURRENT_TIMESTAMP;\n\nThat should reduce the number of actual updates to 1 per transaction.\n\nDavid\n\n\n",
"msg_date": "Sat, 17 Jul 2021 19:02:31 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 17 Jul 2021 at 16:40, Justin Pryzby <[email protected]> wrote:\n>> You could run a single UPDATE rather than 30k triggers.\n>> Or switch to an INSERT on the table, with an index on it, and call\n>> max(last_parent_table_change) from whatever needs to ingest it. And prune the\n>> old entries and vacuum it outside the transaction. Maybe someone else will\n>> have a better suggestion.\n\n> Maybe just change the UPDATE statement to:\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP WHERE\n> last_parent_table_change <> CURRENT_TIMESTAMP;\n> That should reduce the number of actual updates to 1 per transaction.\n\nOr, if it's impractical to make the application do that for itself,\nthis could be a job for suppress_redundant_updates_trigger().\n\nhttps://www.postgresql.org/docs/current/functions-trigger.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Jul 2021 10:33:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "How much does the function-ed UPDATE statement take?\n\n\nRegards,\nNinad Shah\n\nOn Sat, 17 Jul 2021 at 02:57, Tobias Gierke <[email protected]>\nwrote:\n\n> Hi,\n>\n> Probably my google-foo is weak today but I couldn't find any (convincing)\n> explanation for this.\n>\n> I'm running PostgreSQL 12.6 on 64-bit Linux (CentOS 7, PostgreSQL compiled\n> from sources) and I'm trying to insert 30k rows into a simple table that\n> has an \"ON INSERT .. FOR EACH STATEMENT\" trigger.\n>\n> Table \"public.parent_table\"\n>\n> Column | Type | Collation | Nullable | Default\n> -------------+------------------------+-----------+----------+--------------------------------------------\n> id | bigint | | not null | nextval('parent_table_id_seq'::regclass)\n> name | character varying(64) | | not null |\n> enabled | boolean | | not null |\n> description | character varying(255) | | |\n> deleted | boolean | | not null | false\n> is_default | boolean | | not null | false\n>\n> Indexes:\n> \"parent_pkey\" PRIMARY KEY, btree (id)\n> \"uniq_name\" UNIQUE, btree (name) WHERE deleted = false\n>\n> Referenced by:\n> TABLE \"child\" CONSTRAINT \"child_fkey\" FOREIGN KEY (parent_id) REFERENCES parent_table(id) ON DELETE CASCADE\n>\n> Triggers:\n> parent_changed BEFORE INSERT OR DELETE OR UPDATE OR TRUNCATE ON parent_table FOR EACH STATEMENT EXECUTE FUNCTION parent_table_changed();\n>\n>\n> This is the trigger function\n>\n> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n> RETURN NEW;\n> END;\n> $function$\n>\n>\n> I'm trying to insert 30k rows (inside a single transaction) into the\n> parent table using the following SQL (note that I came across this issue\n> while debugging an application-level performance problem and the SQL I'm\n> using here is similar to what the application is generating):\n>\n> BEGIN;\n> -- ALTER TABLE public.parent_table 
DISABLE TRIGGER parent_changed;\n> PREPARE my_insert (varchar(64), boolean, varchar(255), boolean, boolean) AS INSERT INTO public.parent_table (name,enabled,description,deleted,is_default) VALUES($1, $2, $3, $4, $5);\n> EXECUTE my_insert ($$035001$$, true, $$null$$, false, false);\n> EXECUTE my_insert ($$035002$$, true, $$null$$, false, false);\n> ....29998 more lines\n>\n>\n> This is the execution time I get when running the script while the trigger\n> is enabled:\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m8,381s\n> user 0m0,203s\n> sys 0m0,287s\n>\n>\n> Running the same SQL script with trigger disabled shows a ~4x speed-up:\n>\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m2,284s\n> user 0m0,171s\n> sys 0m0,261s\n>\n>\n> Defining the trigger as \"BEFORE INSERT\" or \"AFTER INSERT\" made no\n> difference.\n>\n> I then got curious , put a \"/timing\" at the start of the SQL script,\n> massaged the psql output a bit and plotted a chart of the statement\n> execution times.\n> To my surprise, I see a linear increase of the per-INSERT execution times,\n> roughly 4x as well:\n>\n> While the execution time per INSERT remains constant when disabling the\n> trigger before inserting the data:\n>\n> What's causing this slow-down ?\n>\n> Thanks,\n> Tobias\n>",
"msg_date": "Sun, 18 Jul 2021 12:22:00 +0530",
"msg_from": "Ninad Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "> How much does the function-ed UPDATE statement take?\n\nINSERTs into the parent table are taking ~0.1 ms/INSERT at the beginning \nof the transaction and this increases to ~0.43 ms after 30k INSERTs. \nWithout the trigger, INSERTs into the parent table consistently take \naround 0.05ms.\n\nRegards,\nTobias\n\n>\n>\n> Regards,\n> Ninad Shah\n>\n> On Sat, 17 Jul 2021 at 02:57, Tobias Gierke \n> <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> Probably my google-foo is weak today but I couldn't find any\n> (convincing) explanation for this.\n>\n> I'm running PostgreSQL 12.6 on 64-bit Linux (CentOS 7, PostgreSQL\n> compiled from sources) and I'm trying to insert 30k rows into a\n> simple table that has an \"ON INSERT .. FOR EACH STATEMENT\" trigger.\n>\n> Table \"public.parent_table\"\n>\n> Column | Type | Collation | Nullable | Default\n> -------------+------------------------+-----------+----------+--------------------------------------------\n> id | bigint | | not null | nextval('parent_table_id_seq'::regclass)\n> name | character varying(64) | | not null |\n> enabled | boolean | | not null |\n> description | character varying(255) | | |\n> deleted | boolean | | not null | false\n> is_default | boolean | | not null | false\n>\n> Indexes:\n> \"parent_pkey\" PRIMARY KEY, btree (id)\n> \"uniq_name\" UNIQUE, btree (name) WHERE deleted = false\n>\n> Referenced by:\n> TABLE \"child\" CONSTRAINT \"child_fkey\" FOREIGN KEY (parent_id) REFERENCES parent_table(id) ON DELETE CASCADE\n>\n> Triggers:\n> parent_changed BEFORE INSERT OR DELETE OR UPDATE OR TRUNCATE ON parent_table FOR EACH STATEMENT EXECUTE FUNCTION parent_table_changed();\n>\n> This is the trigger function\n>\n> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\n> AS $function$\n> BEGIN\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n> RETURN NEW;\n> END;\n> $function$\n>\n>\n> I'm trying to insert 30k rows (inside a 
single transaction) into\n> the parent table using the following SQL (note that I came across\n> this issue while debugging an application-level performance\n> problem and the SQL I'm using here is similar to what the\n> application is generating):\n>\n> BEGIN;\n> -- ALTER TABLE public.parent_table DISABLE TRIGGER parent_changed;\n> PREPARE my_insert (varchar(64), boolean, varchar(255), boolean, boolean) AS INSERT INTO public.parent_table (name,enabled,description,deleted,is_default) VALUES($1, $2, $3, $4, $5);\n> EXECUTE my_insert ($$035001$$, true, $$null$$, false, false);\n> EXECUTE my_insert ($$035002$$, true, $$null$$, false, false);\n> ....29998 more lines\n>\n>\n> This is the execution time I get when running the script while the\n> trigger is enabled:\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m8,381s\n> user 0m0,203s\n> sys 0m0,287s\n>\n>\n> Running the same SQL script with trigger disabled shows a ~4x\n> speed-up:\n>\n>\n> ~/tmp$ time psql -q -Upostgres -h dbhost -f inserts.sql test_db\n>\n> real 0m2,284s\n> user 0m0,171s\n> sys 0m0,261s\n>\n>\n> Defining the trigger as \"BEFORE INSERT\" or \"AFTER INSERT\" made no\n> difference.\n>\n> I then got curious , put a \"/timing\" at the start of the SQL\n> script, massaged the psql output a bit and plotted a chart of the\n> statement execution times.\n> To my surprise, I see a linear increase of the per-INSERT\n> execution times, roughly 4x as well:\n>\n> While the execution time per INSERT remains constant when\n> disabling the trigger before inserting the data:\n>\n> What's causing this slow-down ?\n>\n> Thanks,\n> Tobias\n>",
"msg_date": "Sun, 18 Jul 2021 09:19:00 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "Great idea ! This brought the time per INSERT into the parent table down \nto a consistent ~0.065ms again (compared to 0.05ms when completely \nremoving the trigger, so penalty for the trigger is roughly ~20%).\n\n> On Sat, 17 Jul 2021 at 16:40, Justin Pryzby <[email protected]> wrote:\n>> You could run a single UPDATE rather than 30k triggers.\n>> Or switch to an INSERT on the table, with an index on it, and call\n>> max(last_parent_table_change) from whatever needs to ingest it. And prune the\n>> old entries and vacuum it outside the transaction. Maybe someone else will\n>> have a better suggestion.\n> Maybe just change the UPDATE statement to:\n>\n> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP WHERE\n> last_parent_table_change <> CURRENT_TIMESTAMP;\n>\n> That should reduce the number of actual updates to 1 per transaction.\n>\n> David\n\n\n",
"msg_date": "Sun, 18 Jul 2021 09:25:19 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
    "msg_contents": "Thank you for the detailed explanation ! Just one more question: I \ndid an experiment and reduced the fillfactor on the table updated by the \ntrigger to 50%, hoping the HOT feature would kick in and each \nsubsequent INSERT would clean up the \"HOT chain\" of the previous INSERT \n... but execution times did not change at all compared to 100% \nfillfactor, why is this ? Does the HOT feature only work if a different \nbackend accesses the table concurrently ?\n\nThanks,\nTobias\n\n> On Fri, Jul 16, 2021 at 11:27:24PM +0200, Tobias Gierke wrote:\n>> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql\n>> AS $function$\n>> BEGIN\n>> UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;\n>> RETURN NEW;\n>> END;\n>> $function$\n>>\n>> I'm trying to insert 30k rows (inside a single transaction) into the parent\n> The problem is because you're doing 30k updates of data_sync within a txn.\n> Ideally it starts with 1 tuple in 1 page but every row updated requires\n> scanning the previous N rows, which haven't been vacuumed (and cannot).\n> Update is essentially delete+insert, and the table will grow with each update\n> until the txn ends and it's vacuumed.\n>\n> pages: 176 removed, 1 remain, 0 skipped due to pins, 0 skipped frozen\n> tuples: 40000 removed, 1 remain, 0 are dead but not yet removable, oldest xmin: 2027\n>\n> You could run a single UPDATE rather than 30k triggers.\n> Or switch to an INSERT on the table, with an index on it, and call\n> max(last_parent_table_change) from whatever needs to ingest it. And prune the\n> old entries and vacuum it outside the transaction. Maybe someone else will\n> have a better suggestion.\n>\n\n\n",
"msg_date": "Sun, 18 Jul 2021 09:36:33 +0200",
"msg_from": "Tobias Gierke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
},
{
"msg_contents": "On Sun, 2021-07-18 at 09:36 +0200, Tobias Gierke wrote:\n> Thank you for the detailed explanation ! Just one more question: I've \n> did an experiment and reduced the fillfactor on the table updated by the \n> trigger to 50%, hoping the HOT feature would kick in and each \n> subsequent INSERT would clean up the \"HOT chain\" of the previous INSERT \n> ... but execution times did not change at all compared to 100% \n> fillfactor, why is this ? Does the HOT feature only work if a different \n> backend accesses the table concurrently ?\n\nNo, but until the transaction is done, the tuples cannot be removed,\nno matter if they are HOT or not.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Mon, 19 Jul 2021 07:45:02 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linear slow-down while inserting into a table with an ON INSERT\n trigger ?"
}
] |
[
{
    "msg_contents": "New to Postgres, Oracle background. With Oracle the amount of work a query does is tracked via logical reads. Oracle tracks logical and physical reads differently than Postgres. With Oracle a physical read is always considered a logical read. So if a query reads 5 blocks and all 5 are read from disk the query would do 5 logical reads, 5 physical reads. It appears with Postgres Buffers shared hit are reads from memory and Buffers shared read is off disk. To get total reads one would need to add up shared hits + shared reads.\n\nI have a sample query that is doing more work if some of the reads are physical reads and I'm trying to understand why. If you look at attached QueryWithPhyReads.txt it shows the query did Buffers: shared hit=171 read=880. So it did 171 + 880 = 1051 total block reads (some logical, some physical). QueryWithNoPhyReads.txt shows execution statistics of the execution of the exact same query with the same data point. The only difference is the first execution loaded blocks into memory so this execution had all shared hits. In this case the query did this much work: Buffers: shared hit=581.\n\nWith Oracle that would not happen. If the 2nd execution of the query did all reads from memory the shared hits would be 1051, not 581.\n\nSo it appears to me that with Postgres when a query does physical reads it not only has the expense of doing those disk reads but there is also extra work done that increases overall block reads for a query. But I don't understand why that would be the case. Could someone explain why this is happening?\n\nThanks\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html",
"msg_date": "Wed, 21 Jul 2021 17:13:05 +0000",
"msg_from": "\"Dirschel, Steve\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance"
},
{
"msg_contents": "\"Dirschel, Steve\" <[email protected]> writes:\n> I have a sample query that is doing more work if some of the reads are physical reads and I'm trying to understand why. If you look at attached QueryWithPhyReads.txt it shows the query did Buffers: shared hit=171 read=880. So it did 171 + 880 = 1051 total block reads (some logical, some physical). QueryWithNoPhyReads.txt shows execution statistics of the execution of the exact same query with same data point. The only difference is the first execution loaded blocks into memory so this execution had all shared hits. In this case the query did this much work: Buffers: shared hit=581.\n\nYou haven't provided a lot of context for this observation, but I can\nthink of at least one explanation for the discrepancy. If the first\nquery was the first access to these tables after a bunch of updates,\nit would have been visiting a lot of now-dead row versions. It would\nthen have marked the corresponding index entries dead, resulting in the\nsecond execution not having to visit as many heap pages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 14:03:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance"
}
] |
[
{
"msg_contents": "Hello,\n\nWe have a data warehouse working on Postgres V11.2. We have a query that is pretty beefy that has been taking under 5mn to run consistently every day for about 2 years as part of a data warehouse ETL process. It's a pivot over 55 values on a table with some 15M rows. The total table size is over 2GB (table+indices+other).\n\n\nCREATE TABLE assessmenticcqa_raw\n(\n iccqar_iccassmt_fk integer NOT NULL, -- foreign key to assessment\n iccqar_ques_code character varying(255) COLLATE pg_catalog.\"default\" NOT NULL, -- question code\n iccqar_ans_val character varying(255) COLLATE pg_catalog.\"default\" NOT NULL, -- answer value\n \"lastUpdated\" timestamp with time zone NOT NULL DEFAULT now(),\n CONSTRAINT fk_assessmenticcqa_raw_assessment FOREIGN KEY (iccqar_iccassmt_fk)\n REFERENCES assessmenticc_fact (iccassmt_pk) MATCH SIMPLE\n ON UPDATE CASCADE\n ON DELETE RESTRICT\n)\n\nTABLESPACE pg_default;\n\nCREATE UNIQUE INDEX assessmenticcqa_raw_idx_iccqar_assmt_ques\n ON assessmenticcqa_raw USING btree\n (iccqar_iccassmt_fk ASC NULLS LAST, iccqar_ques_code COLLATE pg_catalog.\"default\" ASC NULLS LAST)\n TABLESPACE pg_default;\n\nCREATE INDEX assessmenticcqa_raw_idx_iccqar_lastupdated\n ON assessmenticcqa_raw USING btree\n (\"lastUpdated\" ASC NULLS LAST)\n TABLESPACE pg_default;\n\n\nThe query that does the pivot is:\n\n\nWITH t AS (\n SELECT assessmenticcqa_raw.iccqar_iccassmt_fk AS iccqa_iccassmt_fk,\n assessmenticcqa_raw.iccqar_ques_code,\n max(assessmenticcqa_raw.iccqar_ans_val::text) AS iccqar_ans_val\n FROM assessmenticcqa_raw\n WHERE assessmenticcqa_raw.iccqar_ques_code::text = ANY (ARRAY['DEBRIDEMENT DATE'::character varying::text\n , 'DEBRIDEMENT THIS VISIT'::character varying::text\n , 'DEBRIDEMENT TYPE'::character varying::text\n , 'DEPTH (CM)'::character varying::text\n , 'DEPTH DESCRIPTION'::character varying::text\n , ... 
55 total columns to pivot\n ])\n GROUP BY assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n )\nSELECT t.iccqa_iccassmt_fk,\n max(t.iccqar_ans_val) AS iccqar_ans_val,\n tilda.todate(max(t.iccqar_ans_val) FILTER (WHERE t.iccqar_ques_code::text = 'DEBRIDEMENT DATE'::text)::character varying, NULL::date) AS \"iccqa_DEBRIDEMENT_DATE\",\n max(t.iccqar_ans_val) FILTER (WHERE t.iccqar_ques_code::text = 'DEBRIDEMENT THIS VISIT'::text) AS \"iccqa_DEBRIDEMENT_THIS_VISIT\",\n max(t.iccqar_ans_val) FILTER (WHERE t.iccqar_ques_code::text = 'DEBRIDEMENT TYPE'::text) AS \"iccqa_DEBRIDEMENT_TYPE\",\n tilda.tofloat(max(t.iccqar_ans_val) FILTER (WHERE t.iccqar_ques_code::text = 'DEPTH (CM)'::text)::character varying, NULL::real) AS \"iccqa_DEPTH_CM\",\n max(t.iccqar_ans_val) FILTER (WHERE t.iccqar_ques_code::text = 'DEPTH DESCRIPTION'::text) AS \"iccqa_DEPTH_DESCRIPTION\",\n ... 55 total columns being pivotted\n FROM t\n GROUP BY t.iccqa_iccassmt_fk;\n\n\n\nThis query has been working flawlessly without so much as a glitch every day for the last 2 years or so with of course an increasing amount of data every day (the data grows at about 15-20 thousand records per day). I know the query is not incremental but at under 5mn, it's simple and works well and can handle inconsistent updates on the data source we use which is pretty dirty.\n\nThe problem I am facing is that we are trying to move to Postgres V13.3 and this query (and several others like it) is now taking 10x longer (3,000 seconds vs 300 seconds) which makes it completely unacceptable. I created a V13 instance following standard practices with pg_upgrade. I have V11 and V13 working side by side on the exact same hardware: the VM is an 8-core (16 threads) 64GB windows server 2012 R2 machine with SSD storage. I have vacuumed both V11 and V13 databases full freeze analyze. The V13 is an exact backup of the V11 database content-wise. 
The postgres.conf is the same too and hasn't been touched in years:\n\n\n \"effective_cache_size\": \"52GB\",\n \"from_collapse_limit\": \"24\",\n \"jit\": \"off\",\n \"jit_above_cost\": \"2e+08\",\n \"jit_inline_above_cost\": \"5e+08\",\n \"jit_optimize_above_cost\": \"5e+08\",\n \"join_collapse_limit\": \"24\",\n \"max_parallel_workers\": \"20\",\n \"max_parallel_workers_per_gather\": \"8\",\n \"random_page_cost\": \"1.1\",\n \"temp_buffers\": \"4GB\",\n \"work_mem\": \"384MB\"\n\n\nI have done all my testing with either of the database on while the other was off (shutting down the DB) to make sure there wasn't any weird interaction. I have read some articles about major changes between 11 and 13 (some of which occurred in 12). In particular, information about the JIT sometimes causing trouble, and the way some CTEs can now be inlined and which can also cause trouble.\n\n * As you can see from the config above, I have disabled the JIT to make this more comparable with 11 and eliminate that possible source of issues.\n * I have also tried different versions of the query (MATERIALIZED vs NOT MATERIALIZED) with little impact.\n\nThe plans are pretty much identical too. I checked line by line and couldn't see anything much different (note that I have a view over this query). 
Here is the V13 version of the plan:\n\"[\n {\n \"Plan\": {\n \"Node Type\": \"Subquery Scan\",\n \"Parallel Aware\": false,\n \"Alias\": \"assessmenticcqapivotview\",\n \"Startup Cost\": 1785087.62,\n \"Total Cost\": 1785100.62,\n \"Plan Rows\": 200,\n \"Plan Width\": 1228,\n \"Output\": [\n \"assessmenticcqapivotview.iccqa_iccassmt_fk\",\n \"assessmenticcqapivotview.\\\"iccqa_DEBRIDEMENT_DATE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DEBRIDEMENT_THIS_VISIT\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DEBRIDEMENT_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DEPTH_CM\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DEPTH_DESCRIPTION\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DOES_PATIENT_HAVE_PAIN_ASSOCIATED_WITH_THIS_WOUND\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DRAIN_PRESENT\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_DRAIN_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_EDGE_SURROUNDING_TISSUE_MACERATION\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_EDGES\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_EPITHELIALIZATION\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_EXUDATE_AMOUNT\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_EXUDATE_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_GRANULATION_TISSUE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_INDICATE_OTHER_TYPE_OF_WOUND_CLOSURE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_INDICATE_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_INDICATE_WOUND_CLOSURE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_IS_THIS_A_CLOSED_SURGICAL_WOUND_OR_SUSPECTED_DEEP_TISSUE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_LENGTH_CM\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_MEASUREMENTS_TAKEN\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_NECROTIC_TISSUE_AMOUNT\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_NECROTIC_TISSUE_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_ODOR\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_OTHER_COMMENTS_REGARDING_DEBRIDEMENT_TYPE\\\"\",\n 
\"assessmenticcqapivotview.\\\"iccqa_OTHER_COMMENTS_REGARDING_DRAIN_TYPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_OTHER_COMMENTS_REGARDING_PAIN_INTERVENTIONS\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_OTHER_COMMENTS_REGARDING_PAIN_QUALITY\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_OTHER_COMMENTS_REGARDING_REASON_MEASUREMENTS_NOT_TAKEN\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_PAIN_FREQUENCY\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_PAIN_INTERVENTIONS\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_PAIN_QUALITY\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_PERIPHERAL_TISSUE_EDEMA\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_PERIPHERAL_TISSUE_INDURATION\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_REASON_MEASUREMENTS_NOT_TAKEN\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_RESPONSE_TO_PAIN_INTERVENTIONS\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_SHAPE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_SIGNS_AND_SYMPTOMS_OF_INFECTION\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_SKIN_COLOR_SURROUNDING_WOUND\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_STATE\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_SURFACE_AREA_SQ_CM\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TOTAL_NECROTIC_TISSUE_ESCHAR\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TOTAL_NECROTIC_TISSUE_SLOUGH\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TUNNELING\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TUNNELING_SIZE_CM_LOCATION_12_3_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TUNNELING_SIZE_CM_LOCATION_3_6_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TUNNELING_SIZE_CM_LOCATION_6_9_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_TUNNELING_SIZE_CM_LOCATION_9_12_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_UNDERMINING\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_UNDERMINING_SIZE_CM_LOCATION_12_3_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_UNDERMINING_SIZE_CM_LOCATION_3_6_O_CLOCK\\\"\",\n 
\"assessmenticcqapivotview.\\\"iccqa_UNDERMINING_SIZE_CM_LOCATION_6_9_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_UNDERMINING_SIZE_CM_LOCATION_9_12_O_CLOCK\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_WIDTH_CM\\\"\",\n \"assessmenticcqapivotview.\\\"iccqa_WOUND_PAIN_LEVEL_WHERE_0_NO_PAIN_AND_10_WORST_POS\\\"\"\n ],\n \"Plans\": [\n {\n \"Node Type\": \"Aggregate\",\n \"Strategy\": \"Hashed\",\n \"Partial Mode\": \"Simple\",\n \"Parent Relationship\": \"Subquery\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1785087.62,\n \"Total Cost\": 1785098.62,\n \"Plan Rows\": 200,\n \"Plan Width\": 1260,\n \"Output\": [\n \"t.iccqa_iccassmt_fk\",\n \"NULL::text\",\n \"tilda.todate((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DEBRIDEMENT DATE'::text)))::character varying, NULL::date)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DEBRIDEMENT THIS VISIT'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DEBRIDEMENT TYPE'::text))\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DEPTH (CM)'::text)))::character varying, NULL::real)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DEPTH DESCRIPTION'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?'::text))\",\n \"tilda.toint((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DRAIN PRESENT'::text)))::character varying, NULL::integer)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'DRAIN TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'EDGE / SURROUNDING TISSUE - MACERATION'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'EDGES'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'EPITHELIALIZATION'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE 
((t.iccqar_ques_code)::text = 'EXUDATE AMOUNT'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'EXUDATE TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'GRANULATION TISSUE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'INDICATE OTHER TYPE OF WOUND CLOSURE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'INDICATE TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'INDICATE WOUND CLOSURE'::text))\",\n \"tilda.toint((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?'::text)))::character varying, NULL::integer)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'LENGTH (CM)'::text)))::character varying, NULL::real)\",\n \"tilda.toint((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'MEASUREMENTS TAKEN'::text)))::character varying, NULL::integer)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'NECROTIC TISSUE AMOUNT'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'NECROTIC TISSUE TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'ODOR'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'OTHER COMMENTS REGARDING DEBRIDEMENT TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'OTHER COMMENTS REGARDING DRAIN TYPE'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'OTHER COMMENTS REGARDING PAIN INTERVENTIONS'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'OTHER COMMENTS REGARDING PAIN QUALITY'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN'::text))\",\n 
\"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'PAIN FREQUENCY'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'PAIN INTERVENTIONS'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'PAIN QUALITY'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'PERIPHERAL TISSUE EDEMA'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'PERIPHERAL TISSUE INDURATION'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'REASON MEASUREMENTS NOT TAKEN'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'RESPONSE TO PAIN INTERVENTIONS'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'SHAPE'::text))\",\n \"tilda.toint((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'SIGNS AND SYMPTOMS OF INFECTION'::text)))::character varying, NULL::integer)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'SKIN COLOR SURROUNDING WOUND'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'STATE'::text))\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'SURFACE AREA (SQ CM)'::text)))::character varying, NULL::real)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TOTAL NECROTIC TISSUE ESCHAR'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TOTAL NECROTIC TISSUE SLOUGH'::text))\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TUNNELING'::text))\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK'::text)))::character 
varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'UNDERMINING'::text))\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'WIDTH (CM)'::text)))::character varying, NULL::real)\",\n \"tilda.tofloat((max(t.iccqar_ans_val) FILTER (WHERE ((t.iccqar_ques_code)::text = 'WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"'::text)))::character varying, NULL::real)\"\n ],\n \"Group Key\": [\n \"t.iccqa_iccassmt_fk\"\n ],\n \"Planned Partitions\": 0,\n \"Plans\": [\n {\n \"Node Type\": \"Aggregate\",\n \"Strategy\": \"Hashed\",\n \"Partial Mode\": \"Simple\",\n \"Parent Relationship\": \"InitPlan\",\n \"Subplan Name\": \"CTE t\",\n \"Parallel Aware\": false,\n \"Startup Cost\": 1360804.75,\n \"Total Cost\": 1374830.63,\n \"Plan Rows\": 1402588,\n \"Plan Width\": 56,\n \"Output\": [\n 
\"assessmenticcqa_raw.iccqar_iccassmt_fk\",\n \"assessmenticcqa_raw.iccqar_ques_code\",\n \"max((assessmenticcqa_raw.iccqar_ans_val)::text)\"\n ],\n \"Group Key\": [\n \"assessmenticcqa_raw.iccqar_iccassmt_fk\",\n \"assessmenticcqa_raw.iccqar_ques_code\"\n ],\n \"Planned Partitions\": 0,\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"assessmenticcqa_raw\",\n \"Schema\": \"public\",\n \"Alias\": \"assessmenticcqa_raw\",\n \"Startup Cost\": 0,\n \"Total Cost\": 1256856.62,\n \"Plan Rows\": 13859750,\n \"Plan Width\": 38,\n \"Output\": [\n \"assessmenticcqa_raw.iccqar_iccassmt_fk\",\n \"assessmenticcqa_raw.iccqar_ques_code\",\n \"assessmenticcqa_raw.iccqar_ques_type\",\n \"assessmenticcqa_raw.iccqar_ans_val\",\n \"assessmenticcqa_raw.created\",\n \"assessmenticcqa_raw.\\\"lastUpdated\\\"\",\n \"assessmenticcqa_raw.deleted\"\n ],\n \"Filter\": \"((assessmenticcqa_raw.iccqar_ques_code)::text = ANY ('{\\\"DEBRIDEMENT DATE\\\",\\\"DEBRIDEMENT THIS VISIT\\\",\\\"DEBRIDEMENT TYPE\\\",\\\"DEPTH (CM)\\\",\\\"DEPTH DESCRIPTION\\\",\\\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\\\",\\\"DRAIN PRESENT\\\",\\\"DRAIN TYPE\\\",\\\"EDGE / SURROUNDING TISSUE - MACERATION\\\",EDGES,EPITHELIALIZATION,\\\"EXUDATE AMOUNT\\\",\\\"EXUDATE TYPE\\\",\\\"GRANULATION TISSUE\\\",\\\"INDICATE OTHER TYPE OF WOUND CLOSURE\\\",\\\"INDICATE TYPE\\\",\\\"INDICATE WOUND CLOSURE\\\",\\\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\\\",\\\"LENGTH (CM)\\\",\\\"MEASUREMENTS TAKEN\\\",\\\"NECROTIC TISSUE AMOUNT\\\",\\\"NECROTIC TISSUE TYPE\\\",ODOR,\\\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\\\",\\\"OTHER COMMENTS REGARDING DRAIN TYPE\\\",\\\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\\\",\\\"OTHER COMMENTS REGARDING PAIN QUALITY\\\",\\\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\\\",\\\"PAIN FREQUENCY\\\",\\\"PAIN INTERVENTIONS\\\",\\\"PAIN QUALITY\\\",\\\"PERIPHERAL 
TISSUE EDEMA\\\",\\\"PERIPHERAL TISSUE INDURATION\\\",\\\"REASON MEASUREMENTS NOT TAKEN\\\",\\\"RESPONSE TO PAIN INTERVENTIONS\\\",SHAPE,\\\"SIGNS AND SYMPTOMS OF INFECTION\\\",\\\"SKIN COLOR SURROUNDING WOUND\\\",STATE,\\\"SURFACE AREA (SQ CM)\\\",\\\"TOTAL NECROTIC TISSUE ESCHAR\\\",\\\"TOTAL NECROTIC TISSUE SLOUGH\\\",TUNNELING,\\\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\\\",UNDERMINING,\\\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\\\",\\\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\\\",\\\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\\\",\\\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\\\",\\\"WIDTH (CM)\\\",\\\"WOUND PAIN LEVEL, WHERE 0 = \\\\\\\"NO<file:///%22NO> PAIN\\\\\\\" AND 10 = \\\\\\\"WORST<file:///%22WORST> POSSIBLE PAIN\\\\\\\"\\\"}'::text[]))\"\n }\n ]\n },\n {\n \"Node Type\": \"CTE Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"CTE Name\": \"t\",\n \"Alias\": \"t\",\n \"Startup Cost\": 0,\n \"Total Cost\": 28051.76,\n \"Plan Rows\": 1402588,\n \"Plan Width\": 552,\n \"Output\": [\n \"t.iccqa_iccassmt_fk\",\n \"t.iccqar_ques_code\",\n \"t.iccqar_ans_val\"\n ]\n }\n ]\n }\n ]\n },\n \"Settings\": {\n \"version\":13.3\n \"effective_cache_size\": \"52GB\",\n \"from_collapse_limit\": \"24\",\n \"jit\": \"off\",\n \"jit_above_cost\": \"2e+08\",\n \"jit_inline_above_cost\": \"5e+08\",\n \"jit_optimize_above_cost\": \"5e+08\",\n \"join_collapse_limit\": \"24\",\n \"max_parallel_workers\": \"20\",\n \"max_parallel_workers_per_gather\": \"8\",\n \"random_page_cost\": \"1.1\",\n \"temp_buffers\": \"4GB\",\n \"work_mem\": \"384MB\"\n },\n \"Planning Time\": 0.784\n }\n]\"\n\n\nI am out of my wits as to what is causing such a massive slowdown and how I could fix it.\n\nAny idea out there?\n\nThank you!\nLaurent Hasson\n\n\n\n\n\n\n\n\n\n\n\n 
QUALITY\\\",\\\"PERIPHERAL TISSUE EDEMA\\\",\\\"PERIPHERAL\n TISSUE INDURATION\\\",\\\"REASON MEASUREMENTS NOT TAKEN\\\",\\\"RESPONSE TO PAIN INTERVENTIONS\\\",SHAPE,\\\"SIGNS AND SYMPTOMS OF INFECTION\\\",\\\"SKIN COLOR SURROUNDING WOUND\\\",STATE,\\\"SURFACE AREA (SQ CM)\\\",\\\"TOTAL NECROTIC TISSUE ESCHAR\\\",\\\"TOTAL NECROTIC TISSUE SLOUGH\\\",TUNNELING,\\\"TUNNELING\n SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\\\",\\\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\\\",UNDERMINING,\\\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\\\",\\\"UNDERMINING\n SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\\\",\\\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\\\",\\\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\\\",\\\"WIDTH (CM)\\\",\\\"WOUND PAIN LEVEL, WHERE 0 =\n\\\\\\\"NO PAIN\\\\\\\" AND 10 = \\\\\\\"WORST POSSIBLE PAIN\\\\\\\"\\\"}'::text[]))\"\n }\n ]\n },\n {\n \"Node Type\": \"CTE Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"CTE Name\": \"t\",\n \"Alias\": \"t\",\n \"Startup Cost\": 0,\n \"Total Cost\": 28051.76,\n \"Plan Rows\": 1402588,\n \"Plan Width\": 552,\n \"Output\": [\n \n\"t.iccqa_iccassmt_fk\",\n \"t.iccqar_ques_code\",\n \n\"t.iccqar_ans_val\"\n ]\n }\n ]\n }\n ]\n },\n \"Settings\": {\n \"version\":13.3\n \"effective_cache_size\": \"52GB\",\n \"from_collapse_limit\": \"24\",\n \"jit\": \"off\",\n \"jit_above_cost\": \"2e+08\",\n \"jit_inline_above_cost\": \"5e+08\",\n \"jit_optimize_above_cost\": \"5e+08\",\n \"join_collapse_limit\": \"24\",\n \"max_parallel_workers\": \"20\",\n \"max_parallel_workers_per_gather\": \"8\",\n \"random_page_cost\": \"1.1\",\n \"temp_buffers\": \"4GB\",\n \"work_mem\": \"384MB\"\n },\n \"Planning Time\": 0.784\n }\n]\"\n \n \nI am out of my wits as to what is causing such a massive slowdown and how I could fix it.\n \nAny idea out there?\n \nThank you!\nLaurent Hasson",
"msg_date": "Wed, 21 Jul 2021 18:50:58 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 06:50:58PM +0000, [email protected] wrote:\n> The plans are pretty much identical too. I checked line by line and couldn't see anything much different (note that I have a view over this query). Here is the V13 version of the plan:\n\n> I am out of my wits as to what is causing such a massive slowdown and how I could fix it.\n> \n> Any idea out there?\n\nCould you send the \"explain (analyze,buffers,settings) for query on the v11 and\nv13 instances ?\n\nOr a link to the execution plan pasted into explain.depesz.com.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\n\nIt might be good to check using a copy of your data that there's no regression\nbetween 11.2 and 11.12.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 21 Jul 2021 14:15:28 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I'm not seeing the valueof the CTE. Why not access assessmenticcqa_raw\ndirectly in the main query and only do GROUP BY once? Do you have many\nvalues in iccqar_ques_code which are not used in this query?\n\n>\n\nI'm not seeing the valueof the CTE. Why not access assessmenticcqa_raw directly in the main query and only do GROUP BY once? Do you have many values in iccqar_ques_code which are not used in this query?",
"msg_date": "Wed, 21 Jul 2021 16:12:16 -0600",
"msg_from": "Michael Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: Wednesday, July 21, 2021 15:15\nTo: [email protected]\nCc: [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\nOn Wed, Jul 21, 2021 at 06:50:58PM +0000, [email protected] wrote:\n> The plans are pretty much identical too. I checked line by line and couldn't see anything much different (note that I have a view over this query). Here is the V13 version of the plan:\n\n> I am out of my wits as to what is causing such a massive slowdown and how I could fix it.\n> \n> Any idea out there?\n\nCould you send the \"explain (analyze,buffers,settings) for query on the v11 and\nv13 instances ?\n\nOr a link to the execution plan pasted into explain.depesz.com.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\n\nIt might be good to check using a copy of your data that there's no regression between 11.2 and 11.12.\n\n--\nJustin\n\n\n\n\n\nMy apologies... I thought this is what I had attached in my original email from PGADMIN. 
In any case, I reran from the command line and here are the two plans.\n\n\nV11.2 explain (analyze,buffers,COSTS,TIMING)\n========================================\nHashAggregate (cost=1758361.62..1758372.62 rows=200 width=1260) (actual time=80545.907..161176.867 rows=720950 loops=1)\n Group Key: t.iccqa_iccassmt_fk\n Buffers: shared hit=8 read=170093 written=23, temp written=82961\n CTE t\n -> HashAggregate (cost=1338668.50..1352428.93 rows=1376043 width=56) (actual time=23669.075..32038.977 rows=13821646 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared read=170084 written=23\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1236517.01 rows=13620198 width=38) (actual time=0.081..10525.487 rows=13821646 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE 
SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 169940\n Buffers: shared read=170084 written=23\n -> CTE Scan on t (cost=0.00..27520.86 rows=1376043 width=552) (actual time=23669.081..39393.726 rows=13821646 loops=1)\n Buffers: shared read=170084 written=23, temp written=82961\nPlanning Time: 6.160 ms\nExecution Time: 161,942.304 ms\n\n\nV13.3 explain (analyze,buffers,COSTS,TIMING,SETTINGS)\n======================================================\nHashAggregate (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=81053.437..1699800.741 rows=722853 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 5 Memory Usage: 284737kB Disk Usage: 600000kB\n Buffers: shared hit=20 read=169851, temp read=185258 written=305014\n -> HashAggregate (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=24967.655..47587.401 rows=13852618 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 21 Memory Usage: 393273kB Disk Usage: 683448kB\n Buffers: shared read=169851, temp read=110477 written=174216\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.104..12406.726 rows=13852618 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / 
SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 171680\n Buffers: shared read=169851\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '384MB'\nPlanning:\n Buffers: shared hit=203 read=20\nPlanning Time: 52.820 ms\nExecution Time: 
1,700,228.424 ms\n\n\n\nAs you can see, the V13.3 execution is about 10x slower.\n\nIt may be hard for me to create a whole copy of the database on 11.12 and check that environment by itself. I'd want to do it on the same machine to control variables, and I don't have much extra disk space at the moment.\n\n\n\n\n\nThank you,\nLaurent.\n\n\n",
"msg_date": "Wed, 21 Jul 2021 23:19:04 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\nFrom: Michael Lewis <[email protected]> \r\nSent: Wednesday, July 21, 2021 18:12\r\nTo: [email protected]\r\nCc: [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nI'm not seeing the valueof the CTE. Why not access assessmenticcqa_raw directly in the main query and only do GROUP BY once? Do you have many values in iccqar_ques_code which are not used in this query?\r\n\r\n\r\n\r\n\r\nYes, there are close to 600 different values, and we are picking up only a small amount. And by the way, this is a classic case where the query could be folder as a sub-select in a join clause, and I tried this as well, with the same results.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n",
"msg_date": "Wed, 21 Jul 2021 23:31:06 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 4:19 PM [email protected]\n<[email protected]> wrote:\n> As you can see, the V13.3 execution is about 10x slower.\n>\n> It may be hard for me to create a whole copy of the database on 11.12 and check that environment by itself. I'd want to do it on the same machine to control variables, and I don't have much extra disk space at the moment.\n\nI imagine that this has something to do with the fact that the hash\naggregate spills to disk in Postgres 13.\n\nYou might try increasing hash_mem_multiplier from its default of 1.0,\nto 2.0 or even 4.0. That way you'd be able to use 2x or 4x more memory\nfor executor nodes that are based on hashing (hash join and hash\naggregate), without also affecting other kinds of nodes, which are\ntypically much less sensitive to memory availability. This is very\nsimilar to increasing work_mem, except that it is better targeted.\n\nIt might even make sense to *decrease* work_mem and increase\nhash_mem_multiplier even further than 4.0. That approach is more\naggressive, though, so I wouldn't use it until it actually proved\nnecessary.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 21 Jul 2021 16:33:33 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> My apologies... I thought this is what I had attached in my original email from PGADMIN. In any case, I reran from the command line and here are the two plans.\n\nSo the pain seems to be coming in with the upper hash aggregation, which\nis spilling to disk because work_mem of '384MB' is nowhere near enough.\nThe v11 explain doesn't show any batching there, which makes me suspect\nthat it was using a larger value of work_mem. (There could also be some\nedge effect that is making v13 use a bit more memory for the same number\nof tuples, which could lead it to spill when v11 had managed to scrape by\nwithout doing so.)\n\nSo the first thing I'd try is seeing if setting work_mem to 1GB or so\nimproves matters.\n\nThe other thing that's notable is that v13 has collapsed out the CTE\nthat used to sit between the two levels of hashagg. Now I don't know\nof any reason that that wouldn't be a strict improvement, but if the\nwork_mem theory doesn't pan out then that's something that'd deserve\na closer look. Does marking the WITH as WITH MATERIALIZED change\nanything about v13's performance?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 19:35:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: Peter Geoghegan <[email protected]> \r\nSent: Wednesday, July 21, 2021 19:34\r\nTo: [email protected]\r\nCc: Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nOn Wed, Jul 21, 2021 at 4:19 PM [email protected] <[email protected]> wrote:\r\n> As you can see, the V13.3 execution is about 10x slower.\r\n>\r\n> It may be hard for me to create a whole copy of the database on 11.12 and check that environment by itself. I'd want to do it on the same machine to control variables, and I don't have much extra disk space at the moment.\r\n\r\nI imagine that this has something to do with the fact that the hash aggregate spills to disk in Postgres 13.\r\n\r\nYou might try increasing hash_mem_multiplier from its default of 1.0, to 2.0 or even 4.0. That way you'd be able to use 2x or 4x more memory for executor nodes that are based on hashing (hash join and hash aggregate), without also affecting other kinds of nodes, which are typically much less sensitive to memory availability. This is very similar to increasing work_mem, except that it is better targeted.\r\n\r\nIt might even make sense to *decrease* work_mem and increase hash_mem_multiplier even further than 4.0. That approach is more aggressive, though, so I wouldn't use it until it actually proved necessary.\r\n\r\n--\r\nPeter Geoghegan\r\n\r\n\r\n\r\nSo how is this happening? I mean, it's the exact same query, looks like the same plan to me, it's the same data on the exact same VM etc... Why is that behavior so different?\r\n\r\nAs soon as I can, I'll check if perhaps the hash_mem_multiplier is somehow set differently between the two setups? That would be my first guess, but absent that, looks like a very different behavior across those 2 versions?\r\n\r\nThank you,\r\nLaurent.\r\n",
"msg_date": "Wed, 21 Jul 2021 23:37:40 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> From: Peter Geoghegan <[email protected]> \n>> I imagine that this has something to do with the fact that the hash aggregate spills to disk in Postgres 13.\n\n> So how is this happening? I mean, it's the exact same query, looks like the same plan to me, it's the same data on the exact same VM etc... Why is that behavior so different?\n\nWhat Peter's pointing out is that v11 never spilled hashagg hash tables to\ndisk period, no matter how big they got (possibly leading to out-of-memory\nsituations or swapping, but evidently you have enough RAM to have avoided\nthat sort of trouble). I'd momentarily forgotten that, but I think he's\ndead on about that explaining the difference. As he says, messing with\nhash_mem_multiplier would be a more targeted fix than increasing work_mem\nacross the board.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 19:43:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Wednesday, July 21, 2021 19:36\nTo: [email protected]\nCc: Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\n\"[email protected]\" <[email protected]> writes:\n> My apologies... I thought this is what I had attached in my original email from PGADMIN. In any case, I reran from the command line and here are the two plans.\n\nSo the pain seems to be coming in with the upper hash aggregation, which is spilling to disk because work_mem of '384MB' is nowhere near enough.\nThe v11 explain doesn't show any batching there, which makes me suspect that it was using a larger value of work_mem. (There could also be some edge effect that is making v13 use a bit more memory for the same number of tuples, which could lead it to spill when v11 had managed to scrape by without doing so.)\n\nSo the first thing I'd try is seeing if setting work_mem to 1GB or so improves matters.\n\nThe other thing that's notable is that v13 has collapsed out the CTE that used to sit between the two levels of hashagg. Now I don't know of any reason that that wouldn't be a strict improvement, but if the work_mem theory doesn't pan out then that's something that'd deserve a closer look. Does marking the WITH as WITH MATERIALIZED change anything about v13's performance?\n\n\t\t\tregards, tom lane\n\n\n\n\nHello Tom (and Peter)! Thanks for all this info. \n\nI created 3 versions of this query: CTE MATERIALIZED, CTE NOT MATERIALIZED, and no CTE (select directly in a sub join). Only very minor change in the final execution time (seconds).\n\nI'll try the following later this evening:\n- set work_mem to 1GB\n- play with hash_mem_multiplier as per Peter's suggestions although he did suggest to try being more aggressive with it and lower work_mem... so I'll play with those 2 variables.\n\nThank you,\nLaurent.\n\n\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 23:44:40 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Wednesday, July 21, 2021 19:43\nTo: [email protected]\nCc: Peter Geoghegan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\n\"[email protected]\" <[email protected]> writes:\n> From: Peter Geoghegan <[email protected]> \n>> I imagine that this has something to do with the fact that the hash aggregate spills to disk in Postgres 13.\n\n> So how is this happening? I mean, it's the exact same query, looks like the same plan to me, it's the same data on the exact same VM etc... Why is that behavior so different?\n\nWhat Peter's pointing out is that v11 never spilled hashagg hash tables to\ndisk period, no matter how big they got (possibly leading to out-of-memory\nsituations or swapping, but evidently you have enough RAM to have avoided\nthat sort of trouble). I'd momentarily forgotten that, but I think he's\ndead on about that explaining the difference. As he says, messing with\nhash_mem_multiplier would be a more targeted fix than increasing work_mem\nacross the board.\n\n\t\t\tregards, tom lane\n\n\nOK, got it! That sounds and smells good. Will try later tonight or tomorrow and report back.\n\nThank you!\nLaurent.\n\n\n",
"msg_date": "Wed, 21 Jul 2021 23:45:44 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] <[email protected]> \r\nSent: Wednesday, July 21, 2021 19:46\r\nTo: Tom Lane <[email protected]>\r\nCc: Peter Geoghegan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: RE: Big performance slowdown from 11.2 to 13.3\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <[email protected]> \r\nSent: Wednesday, July 21, 2021 19:43\r\nTo: [email protected]\r\nCc: Peter Geoghegan <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\n\"[email protected]\" <[email protected]> writes:\r\n> From: Peter Geoghegan <[email protected]> \r\n>> I imagine that this has something to do with the fact that the hash aggregate spills to disk in Postgres 13.\r\n\r\n> So how is this happening? I mean, it's the exact same query, looks like the same plan to me, it's the same data on the exact same VM etc... Why is that behavior so different?\r\n\r\nWhat Peter's pointing out is that v11 never spilled hashagg hash tables to\r\ndisk period, no matter how big they got (possibly leading to out-of-memory\r\nsituations or swapping, but evidently you have enough RAM to have avoided\r\nthat sort of trouble). I'd momentarily forgotten that, but I think he's\r\ndead on about that explaining the difference. As he says, messing with\r\nhash_mem_multiplier would be a more targeted fix than increasing work_mem\r\nacross the board.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\nOK, got it! That sounds and smells good. Will try later tonight or tomorrow and report back.\r\n\r\nThank you!\r\nLaurent.\r\n\r\n\r\n\r\n\r\n\r\n\r\nHello all,\r\n\r\nSeems like no cigar ☹ See plan pasted below. 
I changed the conf as follows:\r\n - hash_mem_multiplier = '2'\r\n - work_mem = '1GB'\r\n\r\nI tried a few other configuration, i.e., 512MB/4, 256MB/8 with similar results.\r\n\r\nAlso, you mentioned previously that the hash was spilling to disk? How are you seeing this in the plans? What should I be looking for on my end when playing around with parameters to see the intended effect?\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\nHashAggregate (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=70844.078..1554843.323 rows=722853 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\r\n Batches: 1 Memory Usage: 1277985kB\r\n Buffers: shared hit=14 read=169854, temp read=15777 written=27588\r\n -> HashAggregate (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=23370.026..33839.347 rows=13852618 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\r\n Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB\r\n Buffers: shared read=169851, temp read=15777 written=27588\r\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.072..10906.894 rows=13852618 loops=1)\r\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER 
COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\r\n Rows Removed by Filter: 171680\r\n Buffers: shared read=169851\r\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '2', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '1GB'\r\nPlanning:\r\n Buffers: shared hit=186 read=37\r\nPlanning Time: 3.667 ms\r\nExecution Time: 1555300.746 ms\r\n",
"msg_date": "Thu, 22 Jul 2021 04:37:41 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, 22 Jul 2021 at 16:37, [email protected]\n<[email protected]> wrote:\n> Seems like no cigar ☹ See plan pasted below. I changed the conf as follows:\n> - hash_mem_multiplier = '2'\n> - work_mem = '1GB'\n\n> Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB\n\nYou might want to keep going higher with hash_mem_multiplier until you\nsee no \"Disk Usage\" there. As mentioned, v11 didn't spill to disk and\njust used all the memory it pleased. That was a bit dangerous as it\ncould result in OOM, so it was fixed.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 16:43:51 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "OK. Will do another round of testing.\r\n",
"msg_date": "Thu, 22 Jul 2021 13:37:18 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "Hello all,\r\n\r\nSo, I went possibly nuclear, and still no cigar. Something's not right.\r\n- hash_mem_multiplier = '10'\r\n- work_mem = '1GB'\r\n\r\nThe results are\r\n\tBatches: 5 Memory Usage: 2,449,457kB Disk Usage: 105,936kB\r\n\tExecution Time: 1,837,126.766 ms\r\n\r\nIt's still spilling to disk and seems to cap at 2.5GB of memory usage in spite of configuration. 
More importantly\r\n - I am not understanding how spilling to disk 100MB (which seems low to me and should be fast on our SSD), causes the query to slow down by a factor of 10.\r\n - It seems at the very least that memory consumption on 11 was more moderate? This process of ours was running several of these types of queries concurrently and I don't think I ever saw the machine go over 40GB in memory usage.\r\n\r\n\r\nHashAggregate (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=84860.629..1836583.909 rows=722853 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\r\n Batches: 1 Memory Usage: 1277985kB\r\n Buffers: shared hit=46 read=169822, temp read=13144 written=23035\r\n -> HashAggregate (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=27890.422..39975.074 rows=13852618 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\r\n Batches: 5 Memory Usage: 2449457kB Disk Usage: 105936kB\r\n Buffers: shared hit=32 read=169819, temp read=13144 written=23035\r\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.053..13623.310 rows=13852618 loops=1)\r\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER 
COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\r\n Rows Removed by Filter: 171680\r\n Buffers: shared hit=32 read=169819\r\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '10', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '1GB'\r\nPlanning:\r\n Buffers: shared hit=3\r\nPlanning Time: 1.038 ms\r\nExecution Time: 1837126.766 ms\r\n\r\n\r\nThank you,\r\n\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 15:00:33 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> So, I went possibly nuclear, and still no cigar. Something's not right.\n> - hash_mem_multiplier = '10'\n> - work_mem = '1GB'\n\n> The results are\n> \tBatches: 5 Memory Usage: 2,449,457kB Disk Usage: 105,936kB\n> \tExecution Time: 1,837,126.766 ms\n\n> It's still spilling to disk and seems to cap at 2.5GB of memory usage in spite of configuration.\n\nThat is ... weird. Maybe you have found a bug in the spill-to-disk logic;\nit's quite new after all. Can you extract a self-contained test case that\nbehaves this way?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:45:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I wrote:\n> \"[email protected]\" <[email protected]> writes:\n>> It's still spilling to disk and seems to cap at 2.5GB of memory usage in spite of configuration.\n\n> That is ... weird.\n\nOh: see get_hash_mem:\n\n\thash_mem = (double) work_mem * hash_mem_multiplier;\n\n\t/*\n\t * guc.c enforces a MAX_KILOBYTES limitation on work_mem in order to\n\t * support the assumption that raw derived byte values can be stored in\n\t * 'long' variables. The returned hash_mem value must also meet this\n\t * assumption.\n\t *\n\t * We clamp the final value rather than throw an error because it should\n\t * be possible to set work_mem and hash_mem_multiplier independently.\n\t */\n\tif (hash_mem < MAX_KILOBYTES)\n\t\treturn (int) hash_mem;\n\n\treturn MAX_KILOBYTES;\n\nSo basically, we now have a hard restriction that hashaggs can't use\nmore than INT_MAX kilobytes, or approximately 2.5GB, and this use case\nis getting eaten alive by that restriction. Seems like we need to\ndo something about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:56:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Fri, 23 Jul 2021 at 03:56, Tom Lane <[email protected]> wrote:\n> So basically, we now have a hard restriction that hashaggs can't use\n> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case\n> is getting eaten alive by that restriction. Seems like we need to\n> do something about that.\n\nHmm, math check?\n\npostgres=# select pg_size_pretty(power(2,31)::numeric*1024);\n pg_size_pretty\n----------------\n 2048 GB\n(1 row)\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Jul 2021 04:04:00 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 8:45 AM Tom Lane <[email protected]> wrote:\n> That is ... weird. Maybe you have found a bug in the spill-to-disk logic;\n> it's quite new after all. Can you extract a self-contained test case that\n> behaves this way?\n\nI wonder if this has something to do with the way that the input data\nis clustered. I recall noticing that that could significantly alter\nthe behavior of HashAggs as of Postgres 13.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:14:19 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "Hello!\n\nAh... int vs long then? Tried even more (multiplier=16) and this seems to be definitely the case.\n\nIs it fair then to deduce that the total memory usage would be 2,400,305kB + 126,560kB? Is this what under the covers V11 is consuming more or less?\n\nIs it also expected that a spill over of just 100MB (on top of 2.4GB memory consumption) would cause the query to collapse like this? I am still not visualizing in my head how that would happen. 
100MB just seems so small, and our SSD is fast.\n\nGenerating a dataset would take me a lot of time. This is a clinical database so I cannot reuse the current table. I would have to entirely mock the use case and create a dummy dataset from scratch.\n\n\n\nHashAggregate (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=94618.303..1795311.542 rows=722853 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 1277985kB\n Buffers: shared hit=14 read=169854, temp read=15777 written=27588\n -> HashAggregate (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=30753.022..45384.558 rows=13852618 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB\n Buffers: shared read=169851, temp read=15777 written=27588\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.110..14342.258 rows=13852618 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON 
MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 171680\n Buffers: shared read=169851\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '16', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '1GB'\nPlanning:\n Buffers: shared hit=186 read=37\nPlanning Time: 55.709 ms\nExecution Time: 1795921.717 ms\n\n\nThank you,\nLaurent.\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 16:16:34 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane <[email protected]> wrote:\n> > That is ... weird. Maybe you have found a bug in the spill-to-disk logic;\n> > it's quite new after all. Can you extract a self-contained test case that\n> > behaves this way?\n>\n> I wonder if this has something to do with the way that the input data\n> is clustered. I recall noticing that that could significantly alter\n> the behavior of HashAggs as of Postgres 13.\n\nIsn't it more likely to be reaching the group limit rather than the\nmemory limit?\n\nif (input_groups * hashentrysize < hash_mem * 1024L)\n{\n\tif (num_partitions != NULL)\n\t\t*num_partitions = 0;\n\t*mem_limit = hash_mem * 1024L;\n\t*ngroups_limit = *mem_limit / hashentrysize;\n\treturn;\n}\n\nThere are 55 aggregates on a varchar(255). I think hashentrysize is\npretty big. If it was 255*55 then only 765591 groups fit in the 10GB\nof memory.\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Jul 2021 04:17:45 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "I could execute that test and re-cluster against the index. But I believe that's already done? Let me check.\r\n\r\nThank you,\r\nLaurent.\r\n",
"msg_date": "Thu, 22 Jul 2021 16:18:55 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Fri, 23 Jul 2021 at 03:56, Tom Lane <[email protected]> wrote:\n>> So basically, we now have a hard restriction that hashaggs can't use\n>> more than INT_MAX kilobytes, or approximately 2.5GB, and this use case\n>> is getting eaten alive by that restriction. Seems like we need to\n>> do something about that.\n\n> Hmm, math check?\n\nYeah, I should have said \"2GB plus palloc slop\". It doesn't surprise\nme a bit that we seem to be eating another 20% on top of the nominal\nlimit.\n\nI think the right fix here is to remove the cap, which will require\nchanging get_hash_mem to return double, and then maybe some cascading\nchanges --- I've not looked at its callers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:21:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 04:16:34PM +0000, [email protected] wrote:\n> Is it fair then to deduce that the total memory usage would be 2,400,305kB + 126,560kB? Is this what under the covers V11 is consuming more or less?\n\nIt might be helpful to know how much RAM v11 is using.\n\nCould you run the query with log_executor_stats=on; client_min_messages=debug;\n\nThe interesting part is this:\n! 7808 kB max resident size\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:22:54 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "Hello,\r\n\r\nSo, FYI.... The query I shared is actually a simpler use case of ours 😊 We do have a similar pivot query over 600 columns to create a large flat table for analysis on an even larger table. It takes about 15 min to run on V11 with strong CPU usage and no particular memory usage spike that I can detect via TaskManager. 
We have been pushing PG hard to simplify the workflows of our analysts and data scientists downstream.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 16:24:40 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I wrote:\n> I think the right fix here is to remove the cap, which will require\n> changing get_hash_mem to return double, and then maybe some cascading\n> changes --- I've not looked at its callers.\n\nOr, actually, returning size_t would likely make the most sense.\nWe'd fold the 1024L multiplier in here too instead of doing that\nat the callers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:26:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "Hello Justin,\n\n> log_executor_stats=on; client_min_messages=debug;\n\nWould the results then come in EXPLAIN or would I need to pick something up from the logs?\n\nThank you,\nLaurent.\n",
"msg_date": "Thu, 22 Jul 2021 16:30:00 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 04:30:00PM +0000, [email protected] wrote:\n> Hello Justin,\n> \n> > log_executor_stats=on; client_min_messages=debug;\n> \n> Would the results then come in EXPLAIN or would I need to pick something up from the logs?\n\nIf you're running with psql, and client_min_messages=debug, then it'll show up\nas a debug message to the client:\n\nts=# SET log_executor_stats=on; SET client_min_messages=debug; explain analyze SELECT 1;\nSET\nSET\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000011 s user, 0.000209 s system, 0.000219 s elapsed\n! [0.040608 s user, 0.020304 s system total]\n! 7808 kB max resident size\n...\n\nIt can but doesn't have to use \"explain\" - that just avoids showing the query\noutput, since it's not what's interesting here.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:35:56 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 9:21 AM Tom Lane <[email protected]> wrote:\n> Yeah, I should have said \"2GB plus palloc slop\". It doesn't surprise\n> me a bit that we seem to be eating another 20% on top of the nominal\n> limit.\n\nMAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been\nfor many years. The function get_hash_mem() returns a work_mem-style\nint that callers refer to as hash_mem -- the convention is that\ncallers pretend that there is a work_mem style GUC (called hash_mem)\nthat they must access by calling get_hash_mem().\n\nI don't see how it's possible for get_hash_mem() to be unable to\nreturn a hash_mem value that could be represented by work_mem\ndirectly. MAX_KILOBYTES is an annoyingly low limit on Windows, where\nsizeof(long) is 4. But that's nothing new.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:36:02 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
    "msg_contents": "On Thu, Jul 22, 2021 at 09:36:02AM -0700, Peter Geoghegan wrote:\n> I don't see how it's possible for get_hash_mem() to be unable to\n> return a hash_mem value that could be represented by work_mem\n> directly. MAX_KILOBYTES is an annoyingly low limit on Windows, where\n> sizeof(long) is 4. But that's nothing new.\n\nOh. So the problem seems to be that:\n\n1) In v13, HashAgg now obeys work_mem*hash_mem_multiplier;\n2) Under Windows, work_mem is limited to 2GB.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:41:27 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Thu, Jul 22, 2021 at 9:21 AM Tom Lane <[email protected]> wrote:\n>> Yeah, I should have said \"2GB plus palloc slop\". It doesn't surprise\n>> me a bit that we seem to be eating another 20% on top of the nominal\n>> limit.\n\n> MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been\n> for many years.\n\nRight. The point here is that before v13, hash aggregation was not\nsubject to the work_mem limit, nor any related limit. If you did an\naggregation requiring more than 2GB-plus-slop, it would work just fine\nas long as your machine had enough RAM. Now, the performance sucks and\nthere is no knob you can turn to fix it. That's unacceptable in my book.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:42:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 9:42 AM Tom Lane <[email protected]> wrote:\n> Right. The point here is that before v13, hash aggregation was not\n> subject to the work_mem limit, nor any related limit. If you did an\n> aggregation requiring more than 2GB-plus-slop, it would work just fine\n> as long as your machine had enough RAM. Now, the performance sucks and\n> there is no knob you can turn to fix it. That's unacceptable in my book.\n\nOh! That makes way more sense.\n\nI suspect David's theory about hash_agg_set_limits()'s ngroup limit is\ncorrect. It certainly seems like a good starting point.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:53:21 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> Oh. So the problem seems to be that:\n\n> 1) In v12, HashAgg now obeyes work_mem*hash_mem_multiplier;\n> 2) Under windows, work_mem is limited to 2GB.\n\nAnd more to the point, work_mem*hash_mem_multiplier is *also* limited\nto 2GB. We didn't think that through very carefully. The point of\nthe hash_mem_multiplier feature was to allow hash aggregation to still\nconsume more than the work_mem limit, but we failed to free it from\nthis 2GB limit.\n\nYou're right though that this is Windows-only; on machines with\n64-bit \"long\" there's less of a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:56:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 9:53 AM Peter Geoghegan <[email protected]> wrote:\n> I suspect David's theory about hash_agg_set_limits()'s ngroup limit is\n> correct. It certainly seems like a good starting point.\n\nI also suspect that if Laurent set work_mem and/or hash_mem_multiplier\n*extremely* aggressively, then eventually the hash agg would be\nin-memory. And without actually using all that much memory.\n\nI'm not suggesting that that is a sensible resolution to Laurent's\ncomplaint. I'm just pointing out that it's probably not fundamentally\nimpossible to make the hash agg avoid spilling through tuning these\nGUCs. At least I see no evidence of that right now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:04:31 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> I also suspect that if Laurent set work_mem and/or hash_mem_multiplier\n> *extremely* aggressively, then eventually the hash agg would be\n> in-memory. And without actually using all that much memory.\n\nNo, he already tried, upthread. The trouble is that he's on a Windows\nmachine, so get_hash_mem is quasi-artificially constraining the product\nto 2GB. And he needs it to be a bit more than that. Whether the\nconstraint is hitting at the ngroups stage or it's related to actual\nmemory consumption isn't that relevant.\n\nWhat I'm wondering about is whether it's worth putting in a solution\nfor this issue in isolation, or whether we ought to embark on the\nlong-ignored project of getting rid of use of \"long\" for any\nmemory-size-related computations. There would be no chance of\nback-patching something like the latter into v13, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:11:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I did try 2000MB work_mem and 16 multiplier 😊 It seems to plateau at 2GB no matter what. This is what the explain had:\r\n\r\nHashAggregate (cost=1774568.21..1774579.21 rows=200 width=1260) (actual time=94618.303..1795311.542 rows=722853 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\r\n Batches: 1 Memory Usage: 1277985kB\r\n Buffers: shared hit=14 read=169854, temp read=15777 written=27588\r\n -> HashAggregate (cost=1360804.75..1374830.63 rows=1402588 width=56) (actual time=30753.022..45384.558 rows=13852618 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\r\n Batches: 5 Memory Usage: 2400305kB Disk Usage: 126560kB\r\n Buffers: shared read=169851, temp read=15777 written=27588\r\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256856.62 rows=13859750 width=38) (actual time=0.110..14342.258 rows=13852618 loops=1)\r\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF 
INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\r\n Rows Removed by Filter: 171680\r\n Buffers: shared read=169851\r\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '16', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = ' 2000MB'\r\nPlanning:\r\n Buffers: shared hit=186 read=37\r\nPlanning Time: 55.709 ms\r\nExecution Time: 1795921.717 ms\r\n\r\n\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: Peter Geoghegan <[email protected]> \r\nSent: Thursday, July 22, 2021 13:05\r\nTo: Tom Lane <[email protected]>\r\nCc: David Rowley <[email protected]>; [email protected]; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nOn Thu, Jul 22, 2021 at 9:53 AM Peter Geoghegan <[email protected]> wrote:\r\n> I suspect David's theory about hash_agg_set_limits()'s ngroup limit is \r\n> correct. It certainly seems like a good starting point.\r\n\r\nI also suspect that if Laurent set work_mem and/or hash_mem_multiplier\r\n*extremely* aggressively, then eventually the hash agg would be in-memory. 
And without actually using all that much memory.\r\n\r\nI'm not suggesting that that is a sensible resolution to Laurent's complaint. I'm just pointing out that it's probably not fundamentally impossible to make the hash agg avoid spilling through tuning these GUCs. At least I see no evidence of that right now.\r\n\r\n--\r\nPeter Geoghegan\r\n",
"msg_date": "Thu, 22 Jul 2021 17:16:36 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: Thursday, July 22, 2021 12:36\nTo: [email protected]\nCc: Tom Lane <[email protected]>; David Rowley <[email protected]>; Peter Geoghegan <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\nOn Thu, Jul 22, 2021 at 04:30:00PM +0000, [email protected] wrote:\n> Hello Justin,\n> \n> > log_executor_stats=on; client_min_messages=debug;\n> \n> Would the results then come in EXPLAIN or would I need to pick something up from the logs?\n\nIf you're running with psql, and client_min_messages=debug, then it'll show up as a debug message to the client:\n\nts=# SET log_executor_stats=on; SET client_min_messages=debug; explain analyze SELECT 1; SET SET\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000011 s user, 0.000209 s system, 0.000219 s elapsed\n! [0.040608 s user, 0.020304 s system total]\n! 7808 kB max resident size\n...\n\nIt can but doesn't have to use \"explain\" - that just avoids showing the query output, since it's not what's interesting here.\n\n--\nJustin\n\n\n-------------------------------------------------------------------------------------\n\nJustin,\n\nI tried this but not seeing max resident size data output.\n\nPepper=# SET log_executor_stats=on; SET client_min_messages=debug; explain analyze SELECT 1;\nSET\nSET\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000000 s user, 0.000000 s system, 0.000109 s elapsed\n! [494.640625 s user, 19.171875 s system total]\n QUERY PLAN\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning Time: 0.041 ms\n Execution Time: 0.012 ms\n(3 rows)\n\n\nFor my query:\n\n\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 169.625000 s user, 5.843750 s system, 175.490088 s elapsed\n! 
[494.640625 s user, 19.171875 s system total]\n HashAggregate (cost=1764285.18..1764296.18 rows=200 width=1260) (actual time=86323.813..174737.442 rows=723659 loops=1)\n Group Key: t.iccqa_iccassmt_fk\n Buffers: shared hit=364 read=170293, temp written=83229\n CTE t\n -> HashAggregate (cost=1343178.39..1356985.17 rows=1380678 width=56) (actual time=22594.053..32519.573 rows=13865785 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared hit=364 read=170293\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1240682.76 rows=13666084 width=38) (actual time=0.170..10714.598 rows=13865785 loops=1)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOE\nS PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\n\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTE\nD DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHE\nR COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREM\nENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"R\nESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\n\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 
O''CLOCK\",\"TUNNELING SIZE(CM)/LO\nCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOC\nATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE\n 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 172390\n Buffers: shared hit=364 read=170293\n -> CTE Scan on t (cost=0.00..27613.56 rows=1380678 width=552) (actual time=22594.062..40248.874 rows=13865785 loops=1)\n Buffers: shared hit=364 read=170293, temp written=83229\n Planning Time: 0.728 ms\n Execution Time: 175482.904 ms\n(15 rows)\n\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 17:26:26 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <[email protected]> \r\nSent: Thursday, July 22, 2021 12:42\r\nTo: Peter Geoghegan <[email protected]>\r\nCc: David Rowley <[email protected]>; [email protected]; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nPeter Geoghegan <[email protected]> writes:\r\n> On Thu, Jul 22, 2021 at 9:21 AM Tom Lane <[email protected]> wrote:\r\n>> Yeah, I should have said \"2GB plus palloc slop\". It doesn't surprise \r\n>> me a bit that we seem to be eating another 20% on top of the nominal \r\n>> limit.\r\n\r\n> MAX_KILOBYTES is the max_val for the work_mem GUC itself, and has been \r\n> for many years.\r\n\r\nRight. The point here is that before v13, hash aggregation was not subject to the work_mem limit, nor any related limit. If you did an aggregation requiring more than 2GB-plus-slop, it would work just fine as long as your machine had enough RAM. Now, the performance sucks and there is no knob you can turn to fix it. That's unacceptable in my book.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n-------------------------------------------------------\r\n\r\nHello all,\r\n\r\nAs a user of PG, we have taken pride in the last few years in tuning the heck out of the system and getting great performance compared to alternatives like SQLServer. The customers we work with typically have data centers and are overwhelmingly Windows shops: we won the battle to deploy a complex operational system on PG vs SQLServer, but Linux vs Windows was still a bridge too far for many. I am surprised that this limitation introduced after V11 hasn't caused issues elsewhere though. Are we doing things that are such out of the normal? Are we early in pushing V13 to full production? 
😊 Doing analytics with pivoted tables with hundreds of columns is not uncommon in our world.\r\n\r\nAs for the three other requests from the team:\r\n\r\nClustering:\r\n==========================\r\nI re-clustered the table on the index that drives the pivot logic but I didn't see any change:\r\n\r\ncluster verbose assessmenticcqa_raw using assessmenticcqa_raw_idx_iccqar_assmt_ques;\r\n\r\nHashAggregate (cost=1774465.36..1774476.36 rows=200 width=1260) (actual time=80848.591..1763443.586 rows=722853 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\r\n Batches: 1 Memory Usage: 1277985kB\r\n Buffers: shared hit=369 read=169577, temp read=15780 written=27584\r\n -> HashAggregate (cost=1360748.50..1374772.80 rows=1402430 width=56) (actual time=25475.554..38256.923 rows=13852618 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\r\n Batches: 5 Memory Usage: 2400305kB Disk Usage: 126552kB\r\n Buffers: shared hit=352 read=169577, temp read=15780 written=27584\r\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1256812.09 rows=13858188 width=38) (actual time=0.085..11914.135 rows=13852618 loops=1)\r\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON 
MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\r\n Rows Removed by Filter: 171680\r\n Buffers: shared hit=352 read=169577\r\nSettings: effective_cache_size = '52GB', from_collapse_limit = '24', hash_mem_multiplier = '4', jit = 'off', jit_above_cost = '2e+08', jit_inline_above_cost = '5e+08', jit_optimize_above_cost = '5e+08', join_collapse_limit = '24', max_parallel_workers = '20', max_parallel_workers_per_gather = '8', random_page_cost = '1.1', temp_buffers = '4GB', work_mem = '1GB'\r\nPlanning:\r\n Buffers: shared hit=100 read=1\r\nPlanning Time: 1.663 ms\r\nExecution Time: 1763967.567 ms\r\n\r\n\r\n\r\nMore debugging on V11:\r\n==========================\r\nLOG: EXECUTOR STATISTICS\r\nDETAIL: ! system usage stats:\r\n! 169.625000 s user, 5.843750 s system, 175.490088 s elapsed\r\n! 
[494.640625 s user, 19.171875 s system total]\r\n HashAggregate (cost=1764285.18..1764296.18 rows=200 width=1260) (actual time=86323.813..174737.442 rows=723659 loops=1)\r\n Group Key: t.iccqa_iccassmt_fk\r\n Buffers: shared hit=364 read=170293, temp written=83229\r\n CTE t\r\n -> HashAggregate (cost=1343178.39..1356985.17 rows=1380678 width=56) (actual time=22594.053..32519.573 rows=13865785 loops=1)\r\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\r\n Buffers: shared hit=364 read=170293\r\n -> Seq Scan on assessmenticcqa_raw (cost=0.00..1240682.76 rows=13666084 width=38) (actual time=0.170..10714.598 rows=13865785 loops=1)\r\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOE\r\nS PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\r\n\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTE\r\nD DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHE\r\nR COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREM\r\nENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"R\r\nESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\r\n\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING 
SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LO\r\nCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOC\r\nATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE\r\n 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\r\n Rows Removed by Filter: 172390\r\n Buffers: shared hit=364 read=170293\r\n -> CTE Scan on t (cost=0.00..27613.56 rows=1380678 width=552) (actual time=22594.062..40248.874 rows=13865785 loops=1)\r\n Buffers: shared hit=364 read=170293, temp written=83229\r\n Planning Time: 0.728 ms\r\n Execution Time: 175482.904 ms\r\n(15 rows)\r\n\r\n\r\n\r\nPatch on V13?\r\n==========================\r\nMaybe there can be a patch on V13 and then a longer-term effort afterwards? As it is, I have no way to deploy V13 as this is a hard regression for us.\r\n\r\nThank you,\r\nLaurent.\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 17:26:36 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 10:11 AM Tom Lane <[email protected]> wrote:\n> No, he already tried, upthread. The trouble is that he's on a Windows\n> machine, so get_hash_mem is quasi-artificially constraining the product\n> to 2GB. And he needs it to be a bit more than that. Whether the\n> constraint is hitting at the ngroups stage or it's related to actual\n> memory consumption isn't that relevant.\n\nSomehow I missed that part.\n\n> What I'm wondering about is whether it's worth putting in a solution\n> for this issue in isolation, or whether we ought to embark on the\n> long-ignored project of getting rid of use of \"long\" for any\n> memory-size-related computations. There would be no chance of\n> back-patching something like the latter into v13, though.\n\n+1. Even if we assume that Windows is a low priority platform, in the\nlong run it'll be easier to make it more like every other platform.\n\nThe use of \"long\" is inherently suspect to me. It signals that the\nprogrammer wants something wider than \"int\", even if the standard\ndoesn't actually require that \"long\" be wider. This seems to\ncontradict what we know to be true for Postgres, which is that in\ngeneral it's unsafe to assume that long is int64. It's not just\nwork_mem related calculations. There is also code like logtape.c,\nwhich uses long for block numbers -- that also exposes us to risk on\nWindows.\n\nBy requiring int64 be used instead of long, we don't actually increase\nrisk for non-Windows platforms to any significant degree. I'm pretty\nsure that \"long\" means int64 on non-Windows 64-bit platforms anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:27:43 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 05:26:26PM +0000, [email protected] wrote:\n> I tried this but not seeing max resident size data output.\n\nOh. Apparently, that's not supported under windows..\n\n#if defined(HAVE_GETRUSAGE)\n appendStringInfo(&str,\n \"!\\t%ld kB max resident size\\n\",\n\n\n",
"msg_date": "Thu, 22 Jul 2021 12:28:52 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]> \nSent: Thursday, July 22, 2021 13:29\nTo: [email protected]\nCc: Tom Lane <[email protected]>; David Rowley <[email protected]>; Peter Geoghegan <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\nOn Thu, Jul 22, 2021 at 05:26:26PM +0000, [email protected] wrote:\n> I tried this but not seeing max resident size data output.\n\nOh. Apparently, that's not supported under windows..\n\n#if defined(HAVE_GETRUSAGE)\n appendStringInfo(&str,\n \"!\\t%ld kB max resident size\\n\",\n\n\n----------------------------\n\nHello,\n\nDamn... I know Windows is a lower priority, and this is yet another issue, but in Healthcare, Windows is so prevalent everywhere...\n\nThank you,\nLaurent.\n\n\n",
"msg_date": "Thu, 22 Jul 2021 17:33:09 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane <[email protected]> wrote:\n>> What I'm wondering about is whether it's worth putting in a solution\n>> for this issue in isolation, or whether we ought to embark on the\n>> long-ignored project of getting rid of use of \"long\" for any\n>> memory-size-related computations. There would be no chance of\n>> back-patching something like the latter into v13, though.\n\n> By requiring int64 be used instead of long, we don't actually increase\n> risk for non-Windows platforms to any significant degree. I'm pretty\n> sure that \"long\" means int64 on non-Windows 64-bit platforms anyway.\n\nWell, what we really ought to be using is size_t (a/k/a Size), at least\nfor memory-space-related calculations. I don't have an opinion right\nnow about what logtape.c ought to use. I do agree that avoiding \"long\"\naltogether would be a good ultimate goal.\n\nIn the short term though, the question is whether we want to regard this\nhashagg issue as something we need a fix for in v13/v14. The fact that\nit's Windows-only makes it slightly less pressing in my mind, but it's\nstill a regression that some people are going to hit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:35:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 10:33 AM [email protected]\n<[email protected]> wrote:\n> Damn... I know Windows is a lower priority, and this is yet another issue, but in Healthcare, Windows is so prevalent everywhere...\n\nTo be clear, I didn't actually say that. I said that it doesn't matter\neither way, as far as addressing this long standing \"int64 vs long\"\nissue goes.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:36:06 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: Peter Geoghegan <[email protected]> \r\nSent: Thursday, July 22, 2021 13:36\r\nTo: [email protected]\r\nCc: Justin Pryzby <[email protected]>; Tom Lane <[email protected]>; David Rowley <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nOn Thu, Jul 22, 2021 at 10:33 AM [email protected] <[email protected]> wrote:\r\n> Damn... I know Windows is a lower priority, and this is yet another issue, but in Healthcare, Windows is so prevalent everywhere...\r\n\r\nTo be clear, I didn't actually say that. I said that it doesn't matter either way, as far as addressing this long standing \"int64 vs long\"\r\nissue goes.\r\n\r\n--\r\nPeter Geoghegan\r\n\r\n\r\nYes, agreed Peter... The \"lower priority\" issue was mentioned, but not in terms of the applicability of the fix overall. Personally, I would prefer going the size_t route vs int/long/int64 in C/C++/. Of course, as a user, I'd love a patch on V13 and something cleaner in V14.\r\n\r\nThanks,\r\nLaurent.\r\n\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 17:40:53 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 10:35 AM Tom Lane <[email protected]> wrote:\n> Well, what we really ought to be using is size_t (a/k/a Size), at least\n> for memory-space-related calculations. I don't have an opinion right\n> now about what logtape.c ought to use. I do agree that avoiding \"long\"\n> altogether would be a good ultimate goal.\n\nI assume that we often use \"long\" in contexts where a signed integer\ntype is required. Maybe this is not true in the case of the work_mem\nstyle calculations. But I know that it works that way in logtape.c,\nwhere -1 is a sentinel value.\n\nWe already use int64 (not size_t) in tuplesort.c for roughly the same\nreason: LACKMEM() needs to work with negative values, to handle\ncertain edge cases.\n\n> In the short term though, the question is whether we want to regard this\n> hashagg issue as something we need a fix for in v13/v14. The fact that\n> it's Windows-only makes it slightly less pressing in my mind, but it's\n> still a regression that some people are going to hit.\n\nTrue. I worry about the potential for introducing new bugs on Windows\nby backpatching a fix for this. Technically this restriction existed\nin every other work_mem consumer on Windows. Of course this won't\nmatter much to users like Laurent.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:41:49 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Em qui., 22 de jul. de 2021 às 14:28, Peter Geoghegan <[email protected]> escreveu:\n\n> On Thu, Jul 22, 2021 at 10:11 AM Tom Lane <[email protected]> wrote:\n> > No, he already tried, upthread. The trouble is that he's on a Windows\n> > machine, so get_hash_mem is quasi-artificially constraining the product\n> > to 2GB. And he needs it to be a bit more than that. Whether the\n> > constraint is hitting at the ngroups stage or it's related to actual\n> > memory consumption isn't that relevant.\n>\n> Somehow I missed that part.\n>\n> > What I'm wondering about is whether it's worth putting in a solution\n> > for this issue in isolation, or whether we ought to embark on the\n> > long-ignored project of getting rid of use of \"long\" for any\n> > memory-size-related computations. There would be no chance of\n> > back-patching something like the latter into v13, though.\n>\n> +1. Even if we assume that Windows is a low priority platform, in the\n> long run it'll be easier to make it more like every other platform.\n>\n> The use of \"long\" is inherently suspect to me. It signals that the\n> programmer wants something wider than \"int\", even if the standard\n> doesn't actually require that \"long\" be wider. This seems to\n> contradict what we know to be true for Postgres, which is that in\n> general it's unsafe to assume that long is int64. It's not just\n> work_mem related calculations. There is also code like logtape.c,\n> which uses long for block numbers -- that also exposes us to risk on\n> Windows.\n>\n> By requiring int64 be used instead of long, we don't actually increase\n> risk for non-Windows platforms to any significant degree. 
I'm pretty\n> sure that \"long\" means int64 on non-Windows 64-bit platforms anyway.\n>\nI wonder if similar issues not raise from this [1].\n\n(b/src/backend/optimizer/path/costsize.c)\ncost_tuplesort uses *long* to store sort_mem_bytes.\nI suggested switching to int64, but obviously to no avail.\n\n+1 to switch long to int64.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvqhUYHYGmovoGWJQ1%2BZ%2B50Mz%3DPV6bW%3DQYEh3Z%2BwZTufPQ%40mail.gmail.com",
"msg_date": "Thu, 22 Jul 2021 14:56:03 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On 2021-Jul-22, [email protected] wrote:\n\n> Yes, agreed Peter... The \"lower priority\" issue was mentioned, but not\n> in terms of the applicability of the fix overall. Personally, I would\n> prefer going the size_t route vs int/long/int64 in C/C++/. Of course,\n> as a user, I'd love a patch on V13 and something cleaner in V14.\n\nJust to clarify our terminology here. \"A patch\" means any kind of\nchange to the source code, regardless of its cleanliness or\napplicability to versions deemed stable. You can have one patch which\nis a ugly hack for a stable version that can't have invasive changes,\nand another patch which is a clean, more maintainable version of a\ntotally different fix for the version in development. We use the same\nword, \"patch\", for both types of source code changes.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:56:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Just asking, I may be completely wrong.\n\nis this query parallel safe?\ncan we force parallel workers, by setting low parallel_setup_cost or\notherwise to make use of scatter gather and Partial HashAggregate(s)?\nI am just assuming more workers doing things in parallel, would require\nless disk spill per hash aggregate (or partial hash aggregate ?) and the\nscatter gather at the end.\n\nI did some runs in my demo environment, not with the same query, some group\nby aggregates with around 25M rows, and it showed reasonable results, not\ntoo off.\nthis was pg14 on ubuntu.\n\nJust asking, I may be completely wrong.is this query parallel safe?can we force parallel workers, by setting low parallel_setup_cost or otherwise to make use of scatter gather and Partial HashAggregate(s)?I am just assuming more workers doing things in parallel, would require less disk spill per hash aggregate (or partial hash aggregate ?) and the scatter gather at the end.I did some runs in my demo environment, not with the same query, some group by aggregates with around 25M rows, and it showed reasonable results, not too off.this was pg14 on ubuntu.",
"msg_date": "Fri, 23 Jul 2021 02:01:45 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I am not sure I understand this parameter well enough but it’s with a default value right now of 1000. I have read Robert’s post (http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html) and could play with those parameters, but unsure whether what you are describing will unlock this 2GB limit.\r\n\r\n\r\nFrom: Vijaykumar Jain <[email protected]>\r\nSent: Thursday, July 22, 2021 16:32\r\nTo: [email protected]\r\nCc: Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nJust asking, I may be completely wrong.\r\n\r\nis this query parallel safe?\r\ncan we force parallel workers, by setting low parallel_setup_cost or otherwise to make use of scatter gather and Partial HashAggregate(s)?\r\nI am just assuming more workers doing things in parallel, would require less disk spill per hash aggregate (or partial hash aggregate ?) and the scatter gather at the end.\r\n\r\nI did some runs in my demo environment, not with the same query, some group by aggregates with around 25M rows, and it showed reasonable results, not too off.\r\nthis was pg14 on ubuntu.\r\n",
"msg_date": "Thu, 22 Jul 2021 21:36:04 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Fri, 23 Jul 2021 at 03:06, [email protected] <[email protected]>\nwrote:\n\n> I am not sure I understand this parameter well enough but it’s with a\n> default value right now of 1000. I have read Robert’s post (\n> http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html)\n> and could play with those parameters, but unsure whether what you are\n> describing will unlock this 2GB limit.\n>\n>\n>\n\nYeah, may be i was diverting, and possibly cannot use the windows\nbottleneck.\n\nalthough the query is diff, the steps were\n1) use system default, work_mem = 4MB, parallel_setup_cost = 1000\n-- runs the query in parallel, no disk spill as work_mem suff.for my data\n\npostgres=# explain analyze with cte as (select month_name, day_name,\nyear_actual, max(date) date from dimensions.dates group by year_actual,\nmonth_name, day_name) select max(date),year_actual from cte group by\nyear_actual;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=931227.78..932398.85 rows=200 width=8) (actual\ntime=7850.214..7855.848 rows=51 loops=1)\n Group Key: dates.year_actual\n -> Finalize GroupAggregate (cost=931227.78..932333.85 rows=4200\nwidth=28) (actual time=7850.075..7855.611 rows=4201 loops=1)\n Group Key: dates.year_actual, dates.month_name, dates.day_name\n -> Gather Merge (cost=931227.78..932207.85 rows=8400 width=28)\n(actual time=7850.069..7854.008 rows=11295 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=930227.76..930238.26 rows=4200 width=28)\n(actual time=7846.419..7846.551 rows=3765 loops=3)\n Sort Key: dates.year_actual, dates.month_name,\ndates.day_name\n Sort Method: quicksort Memory: 391kB\n Worker 0: Sort Method: quicksort Memory: 392kB\n Worker 1: Sort Method: quicksort Memory: 389kB\n -> Partial HashAggregate (cost=929933.00..929975.00\nrows=4200 width=28) (actual 
time=7841.979..7842.531 rows=3765 loops=3)\n Group Key: dates.year_actual, dates.month_name,\ndates.day_name\n Batches: 1 Memory Usage: 721kB\n Worker 0: Batches: 1 Memory Usage: 721kB\n Worker 1: Batches: 1 Memory Usage: 721kB\n -> Parallel Seq Scan on dates\n(cost=0.00..820355.00 rows=10957800 width=28) (actual time=3.347..4779.784\nrows=8766240 loops=3)\n Planning Time: 0.133 ms\n Execution Time: 7855.958 ms\n(20 rows)\n\n-- set work_mem to a very low value, to spill to disk and compare the\nspill in parallel vs serial\npostgres=# set work_mem TO 64; --\nSET\npostgres=# explain analyze with cte as (select month_name, day_name,\nyear_actual, max(date) date from dimensions.dates group by year_actual,\nmonth_name, day_name) select max(date),year_actual from cte group by\nyear_actual;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=2867778.00..2868949.07 rows=200 width=8) (actual\ntime=18116.529..18156.972 rows=51 loops=1)\n Group Key: dates.year_actual\n -> Finalize GroupAggregate (cost=2867778.00..2868884.07 rows=4200\nwidth=28) (actual time=18116.421..18156.729 rows=4201 loops=1)\n Group Key: dates.year_actual, dates.month_name, dates.day_name\n -> Gather Merge (cost=2867778.00..2868758.07 rows=8400 width=28)\n(actual time=18116.412..18155.136 rows=11126 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=2866777.98..2866788.48 rows=4200 width=28)\n(actual time=17983.836..17984.981 rows=3709 loops=3)\n Sort Key: dates.year_actual, dates.month_name,\ndates.day_name\n Sort Method: external merge Disk: 160kB\n Worker 0: Sort Method: external merge Disk: 168kB\n Worker 1: Sort Method: external merge Disk: 160kB\n -> Partial HashAggregate\n(cost=2566754.38..2866423.72 rows=4200 width=28) (actual\ntime=10957.390..17976.250 rows=3709 loops=3)\n Group Key: dates.year_actual, dates.month_name,\ndates.day_name\n 
Planned Partitions: 4 Batches: 21 Memory\nUsage: 93kB Disk Usage: 457480kB\n Worker 0: Batches: 21 Memory Usage: 93kB Disk\nUsage: *473056kB*\n Worker 1: Batches: 21 Memory Usage: 93kB Disk\nUsage: *456792kB*\n -> Parallel Seq Scan on dates\n(cost=0.00..820355.00 rows=10957800 width=28) (actual time=1.042..5893.803\nrows=8766240 loops=3)\n Planning Time: 0.142 ms\n Execution Time: 18195.973 ms\n(20 rows)\n\npostgres=# set parallel_setup_cost TO 1000000000; -- make sure it never\nuses parallel, check disk spill (much more than when parallel workers used)\nSET\npostgres=# explain analyze with cte as (select month_name, day_name,\nyear_actual, max(date) date from dimensions.dates group by year_actual,\nmonth_name, day_name) select max(date),year_actual from cte group by\nyear_actual;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5884624.58..5884658.08 rows=200 width=8) (actual\ntime=35462.340..35463.142 rows=51 loops=1)\n Group Key: cte.year_actual\n -> Sort (cost=5884624.58..5884635.08 rows=4200 width=8) (actual\ntime=35462.325..35462.752 rows=4201 loops=1)\n Sort Key: cte.year_actual\n Sort Method: external merge Disk: 80kB\n -> Subquery Scan on cte (cost=5165122.70..5884312.33 rows=4200\nwidth=8) (actual time=21747.139..35461.371 rows=4201 loops=1)\n -> HashAggregate (cost=5165122.70..5884270.33 rows=4200\nwidth=28) (actual time=21747.138..35461.140 rows=4201 loops=1)\n Group Key: dates.year_actual, dates.month_name,\ndates.day_name\n Planned Partitions: 4 Batches: 21 Memory Usage:\n93kB Disk Usage: *1393192kB*\n -> Seq Scan on dates (cost=0.00..973764.20\nrows=26298720 width=28) (actual time=0.005..10698.392 rows=26298721 loops=1)\n Planning Time: 0.124 ms\n Execution Time: 35548.514 ms\n(12 rows)\n\nI was thinking trying to make the query run in parallel, would reduce disk\nio per worker, and maybe speed up aggregate, especially 
if it runs around 1\nhour.\nof course, this was just trying things, maybe i am trying to override\noptimizer, but just wanted to understand cost diff and resource by forcing\ncustom plans.\n\ni also tried with enable_sort to off, enable_hashagg to off <it only got\nworse, so not sharing as it would deviate the thread>.\n\nagain, ignore, if it does not make sense :)",
"msg_date": "Fri, 23 Jul 2021 20:15:01 +0530",
"msg_from": "Vijaykumar Jain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "From: Vijaykumar Jain <[email protected]>\r\nSent: Friday, July 23, 2021 10:45\r\nTo: [email protected]\r\nCc: Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\nOn Fri, 23 Jul 2021 at 03:06, [email protected]<mailto:[email protected]> <[email protected]<mailto:[email protected]>> wrote:\r\nI am not sure I understand this parameter well enough but it’s with a default value right now of 1000. I have read Robert’s post (http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html) and could play with those parameters, but unsure whether what you are describing will unlock this 2GB limit.\r\n\r\n\r\nYeah, may be i was diverting, and possibly cannot use the windows bottleneck.\r\n\r\nalthough the query is diff, the steps were\r\n1) use system default, work_mem = 4MB, parallel_setup_cost = 1000\r\n-- runs the query in parallel, no disk spill as work_mem suff.for my data\r\n\r\npostgres=# explain analyze with cte as (select month_name, day_name, year_actual, max(date) date from dimensions.dates group by year_actual, month_name, day_name) select max(date),year_actual from cte group by year_actual;\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n GroupAggregate (cost=931227.78..932398.85 rows=200 width=8) (actual time=7850.214..7855.848 rows=51 loops=1)\r\n Group Key: dates.year_actual\r\n -> Finalize GroupAggregate (cost=931227.78..932333.85 rows=4200 width=28) (actual time=7850.075..7855.611 rows=4201 loops=1)\r\n Group Key: dates.year_actual, dates.month_name, dates.day_name\r\n -> Gather Merge (cost=931227.78..932207.85 rows=8400 width=28) (actual time=7850.069..7854.008 rows=11295 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Sort (cost=930227.76..930238.26 rows=4200 width=28) (actual time=7846.419..7846.551 rows=3765 
loops=3)\r\n Sort Key: dates.year_actual, dates.month_name, dates.day_name\r\n Sort Method: quicksort Memory: 391kB\r\n Worker 0: Sort Method: quicksort Memory: 392kB\r\n Worker 1: Sort Method: quicksort Memory: 389kB\r\n -> Partial HashAggregate (cost=929933.00..929975.00 rows=4200 width=28) (actual time=7841.979..7842.531 rows=3765 loops=3)\r\n Group Key: dates.year_actual, dates.month_name, dates.day_name\r\n Batches: 1 Memory Usage: 721kB\r\n Worker 0: Batches: 1 Memory Usage: 721kB\r\n Worker 1: Batches: 1 Memory Usage: 721kB\r\n -> Parallel Seq Scan on dates (cost=0.00..820355.00 rows=10957800 width=28) (actual time=3.347..4779.784 rows=8766240 loops=3)\r\n Planning Time: 0.133 ms\r\n Execution Time: 7855.958 ms\r\n(20 rows)\r\n\r\n-- set work_mem to a very low value, to spill to disk and compare the spill in parallel vs serial\r\npostgres=# set work_mem TO 64; --\r\nSET\r\npostgres=# explain analyze with cte as (select month_name, day_name, year_actual, max(date) date from dimensions.dates group by year_actual, month_name, day_name) select max(date),year_actual from cte group by year_actual;\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n GroupAggregate (cost=2867778.00..2868949.07 rows=200 width=8) (actual time=18116.529..18156.972 rows=51 loops=1)\r\n Group Key: dates.year_actual\r\n -> Finalize GroupAggregate (cost=2867778.00..2868884.07 rows=4200 width=28) (actual time=18116.421..18156.729 rows=4201 loops=1)\r\n Group Key: dates.year_actual, dates.month_name, dates.day_name\r\n -> Gather Merge (cost=2867778.00..2868758.07 rows=8400 width=28) (actual time=18116.412..18155.136 rows=11126 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Sort (cost=2866777.98..2866788.48 rows=4200 width=28) (actual time=17983.836..17984.981 rows=3709 loops=3)\r\n Sort Key: dates.year_actual, dates.month_name, dates.day_name\r\n Sort 
Method: external merge Disk: 160kB\r\n Worker 0: Sort Method: external merge Disk: 168kB\r\n Worker 1: Sort Method: external merge Disk: 160kB\r\n -> Partial HashAggregate (cost=2566754.38..2866423.72 rows=4200 width=28) (actual time=10957.390..17976.250 rows=3709 loops=3)\r\n Group Key: dates.year_actual, dates.month_name, dates.day_name\r\n Planned Partitions: 4 Batches: 21 Memory Usage: 93kB Disk Usage: 457480kB\r\n Worker 0: Batches: 21 Memory Usage: 93kB Disk Usage: 473056kB\r\n Worker 1: Batches: 21 Memory Usage: 93kB Disk Usage: 456792kB\r\n -> Parallel Seq Scan on dates (cost=0.00..820355.00 rows=10957800 width=28) (actual time=1.042..5893.803 rows=8766240 loops=3)\r\n Planning Time: 0.142 ms\r\n Execution Time: 18195.973 ms\r\n(20 rows)\r\n\r\npostgres=# set parallel_setup_cost TO 1000000000; -- make sure it never uses parallel, check disk spill (much more than when parallel workers used)\r\nSET\r\npostgres=# explain analyze with cte as (select month_name, day_name, year_actual, max(date) date from dimensions.dates group by year_actual, month_name, day_name) select max(date),year_actual from cte group by year_actual;\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------------------------------\r\n GroupAggregate (cost=5884624.58..5884658.08 rows=200 width=8) (actual time=35462.340..35463.142 rows=51 loops=1)\r\n Group Key: cte.year_actual\r\n -> Sort (cost=5884624.58..5884635.08 rows=4200 width=8) (actual time=35462.325..35462.752 rows=4201 loops=1)\r\n Sort Key: cte.year_actual\r\n Sort Method: external merge Disk: 80kB\r\n -> Subquery Scan on cte (cost=5165122.70..5884312.33 rows=4200 width=8) (actual time=21747.139..35461.371 rows=4201 loops=1)\r\n -> HashAggregate (cost=5165122.70..5884270.33 rows=4200 width=28) (actual time=21747.138..35461.140 rows=4201 loops=1)\r\n Group Key: dates.year_actual, dates.month_name, dates.day_name\r\n Planned Partitions: 4 Batches: 21 
Memory Usage: 93kB Disk Usage: 1393192kB\r\n -> Seq Scan on dates (cost=0.00..973764.20 rows=26298720 width=28) (actual time=0.005..10698.392 rows=26298721 loops=1)\r\n Planning Time: 0.124 ms\r\n Execution Time: 35548.514 ms\r\n(12 rows)\r\n\r\nI was thinking trying to make the query run in parallel, would reduce disk io per worker, and maybe speed up aggregate, especially if ti runs around 1 hours.\r\nofcourse, this was just trying things, maybe i am trying to override optimizer, but just wanted to understand cost diff and resource by forcing custom plans.\r\n\r\ni also tried with enable_sort to off, enable_hashag to off <it only got worse, so not sharing as it would deviate the thread>.\r\n\r\nagain, ignore, if it does not make sense :)\r\n\r\n\r\n\r\n\r\nHello,\r\n\r\nOK, that makes sense. I have some limited time to test those additional scenarios, but they make sense. I’ll see what I can do. The query on 11 takes under 5mn, and 50mn+ on 13.\r\n\r\nThank you,\r\nLaurent.\r\n",
"msg_date": "Fri, 23 Jul 2021 17:17:48 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> As a user of PG, we have taken pride in the last few years in tuning the heck out of the system and getting great performance compared to alternatives like SQLServer. The customers we work with typically have data centers and are overwhelmingly Windows shops: we won the battle to deploy a complex operational system on PG vs SQLServer, but Linux vs Windows was still a bridge too far for many. I am surprised that this limitation introduced after V11 hasn't caused issues elsewhere though.\n\nMaybe it has, but you're the first to report the problem, or at least\nthe first to report it with enough detail to trace the cause.\n\nI've pushed a fix that removes the artificial restriction on work_mem\ntimes hash_mem_multiplier; it will be in next month's 13.4 release.\nYou'll still need to increase hash_mem_multiplier to get satisfactory\nperformance on your workload, but at least it'll be possible to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 14:08:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <[email protected]> \r\nSent: Sunday, July 25, 2021 14:08\r\nTo: [email protected]\r\nCc: Peter Geoghegan <[email protected]>; David Rowley <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\r\nSubject: Re: Big performance slowdown from 11.2 to 13.3\r\n\r\n\"[email protected]\" <[email protected]> writes:\r\n> As a user of PG, we have taken pride in the last few years in tuning the heck out of the system and getting great performance compared to alternatives like SQLServer. The customers we work with typically have data centers and are overwhelmingly Windows shops: we won the battle to deploy a complex operational system on PG vs SQLServer, but Linux vs Windows was still a bridge too far for many. I am surprised that this limitation introduced after V11 hasn't caused issues elsewhere though.\r\n\r\nMaybe it has, but you're the first to report the problem, or at least the first to report it with enough detail to trace the cause.\r\n\r\nI've pushed a fix that removes the artificial restriction on work_mem times hash_mem_multiplier; it will be in next month's 13.4 release.\r\nYou'll still need to increase hash_mem_multiplier to get satisfactory performance on your workload, but at least it'll be possible to do that.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n\r\nHello Tom,\r\n\r\nSo happy to help in whatever capacity I can 😊\r\n\r\nThis is fantastic news! Can't wait to try 13.4 asap and will report back.\r\n\r\nThank you,\r\nLaurent.\r\n",
"msg_date": "Mon, 26 Jul 2021 05:12:30 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Tom,\n\nOne question that popped up in my head. hash_mem_multiplier is an upper-bound right: it doesn't reserve memory ahead of time correct? So there is no reason for me to spend undue amounts of time fine-tuning this parameter? If I have work_mem to 521MB, then I can set hash_mem_multiplier to 8 and should be OK. This doesn't mean that every query will consume 4GB of memory.\n\nThank you,\nLaurent.\n\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Sunday, July 25, 2021 14:08\nTo: [email protected]\nCc: Peter Geoghegan <[email protected]>; David Rowley <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\n\"[email protected]\" <[email protected]> writes:\n> As a user of PG, we have taken pride in the last few years in tuning the heck out of the system and getting great performance compared to alternatives like SQLServer. The customers we work with typically have data centers and are overwhelmingly Windows shops: we won the battle to deploy a complex operational system on PG vs SQLServer, but Linux vs Windows was still a bridge too far for many. I am surprised that this limitation introduced after V11 hasn't caused issues elsewhere though.\n\nMaybe it has, but you're the first to report the problem, or at least the first to report it with enough detail to trace the cause.\n\nI've pushed a fix that removes the artificial restriction on work_mem times hash_mem_multiplier; it will be in next month's 13.4 release.\nYou'll still need to increase hash_mem_multiplier to get satisfactory performance on your workload, but at least it'll be possible to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 02:57:48 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 7:57 PM [email protected] <\[email protected]> wrote:\n\n> hash_mem_multiplier is an upper-bound right: it doesn't reserve memory\n> ahead of time correct?\n>\n\nYes, that is what the phrasing \"maximum amount\" in the docs is trying to\nconvey.\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY\n\nBut also note that it is \"each operation\" that gets access to that limit.\n\nDavid J.\n\nOn Tue, Jul 27, 2021 at 7:57 PM [email protected] <[email protected]> wrote:hash_mem_multiplier is an upper-bound right: it doesn't reserve memory ahead of time correct?Yes, that is what the phrasing \"maximum amount\" in the docs is trying to convey.https://www.postgresql.org/docs/current/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORYBut also note that it is \"each operation\" that gets access to that limit.David J.",
"msg_date": "Tue, 27 Jul 2021 20:06:59 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> One question that popped up in my head. hash_mem_multiplier is an upper-bound right: it doesn't reserve memory ahead of time correct? So there is no reason for me to spend undue amounts of time fine-tuning this parameter? If I have work_mem to 521MB, then I can set hash_mem_multiplier to 8 and should be OK. This doesn't mean that every query will consume 4GB of memory.\n\nYeah, I wouldn't sweat over the specific value. The pre-v13 behavior\nwas effectively equivalent to hash_mem_multiplier = infinity, so if\nyou weren't having any OOM problems before, just crank it up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 23:15:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "I wrote:\n> Yeah, I wouldn't sweat over the specific value. The pre-v13 behavior\n> was effectively equivalent to hash_mem_multiplier = infinity, so if\n> you weren't having any OOM problems before, just crank it up.\n\nOh, wait, scratch that: the old executor's behavior is accurately\ndescribed by that statement, but the planner's is not. The planner\nwill not pick a hashagg plan if it guesses that the hash table\nwould exceed the configured limit (work_mem before, now work_mem\ntimes hash_mem_multiplier). So raising hash_mem_multiplier to the\nmoon might bias the v13 planner to pick hashagg plans in cases\nwhere earlier versions would not have. This doesn't describe your\nimmediate problem, but it might be a reason to not just set the\nvalue as high as you can.\n\nBTW, this also suggests that the planner is underestimating the\namount of memory needed for the hashagg, both before and after.\nThat might be something to investigate at some point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 23:31:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Tuesday, July 27, 2021 23:31\nTo: [email protected]\nCc: Peter Geoghegan <[email protected]>; David Rowley <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\nI wrote:\n> Yeah, I wouldn't sweat over the specific value. The pre-v13 behavior \n> was effectively equivalent to hash_mem_multiplier = infinity, so if \n> you weren't having any OOM problems before, just crank it up.\n\nOh, wait, scratch that: the old executor's behavior is accurately described by that statement, but the planner's is not. The planner will not pick a hashagg plan if it guesses that the hash table would exceed the configured limit (work_mem before, now work_mem times hash_mem_multiplier). So raising hash_mem_multiplier to the moon might bias the v13 planner to pick hashagg plans in cases where earlier versions would not have. This doesn't describe your immediate problem, but it might be a reason to not just set the value as high as you can.\n\nBTW, this also suggests that the planner is underestimating the amount of memory needed for the hashagg, both before and after.\nThat might be something to investigate at some point.\n\n\t\t\tregards, tom lane\n\n\nThis is very useful to know... all things I'll get to test after 13.4 is released. I'll report back when I am able to.\n\nThank you,\nLaurent.\n\n\n",
"msg_date": "Wed, 28 Jul 2021 20:12:53 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
},
{
"msg_contents": "Hello all...\n\nI upgraded to 13.4 and re-ran the various versions of the query. I think I get a mixed bag of results. In the spirit of closing this thread, I'll start with the fantastic news and then start a new thread with the other issue I have isolated.\n\nIt looks like the query I submitted initially is now performing very well!!! It's funny but it looks like the query is consuming a tiny bit over 2.5GB, so my use case was literally a hair above the memory limit that we had discovered. V13.4 takes 54s vs V11.2 takes 74s!! This is quite amazing to have such a performance gain! 25% better. The query was as follows:\n\nselect \"iccqa_iccassmt_fk\" \n , MAX(\"iccqar_ans_val\") as \"iccqar_ans_val\"\n , (MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEBRIDEMENT DATE'))::text as \"iccqa_DEBRIDEMENT_DATE\"\n , (MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEBRIDEMENT THIS VISIT') )::text as \"iccqa_DEBRIDEMENT_THIS_VISIT\"\n , (MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEBRIDEMENT TYPE') )::text as \"iccqa_DEBRIDEMENT_TYPE\"\n , (MAX(\"iccqar_ans_val\") filter (where \"iccqar_ques_code\"= 'DEPTH (CM)'))::text as \"iccqa_DEPTH_CM\"\n , ... 50 more columns\nfrom (\n-- 'A pivoted view of ICC QA assessments'\nselect VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_iccassmt_fk\" as \"iccqa_iccassmt_fk\" -- The key identifying an ICC assessment.\n , VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" as \"iccqar_ques_code\" -- The question long code from the meta-data.\n , max(VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ans_val\") as \"iccqar_ans_val\" -- The official answer, if applicable) from the meta-data.\n from VNAHGEDW_FACTS.AssessmentICCQA_Raw\n where VNAHGEDW_FACTS.AssessmentICCQA_Raw.\"iccqar_ques_code\" in ('DEBRIDEMENT DATE', 'DEBRIDEMENT THIS VISIT', 'DEBRIDEMENT TYPE', 'DEPTH (CM)', ... 
50 more values)\n group by 1, 2\n) T\n group by 1\n;\n\nThe plans are:\n\nV13.4 - 54s\nHashAggregate (cost=1486844.46..1486846.46 rows=200 width=1764) (actual time=50714.016..53813.049 rows=677899 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Batches: 1 Memory Usage: 1196065kB\n Buffers: shared hit=158815\n -> Finalize HashAggregate (cost=1100125.42..1113234.54 rows=1310912 width=56) (actual time=14487.522..20498.241 rows=12926549 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 2572305kB\n Buffers: shared hit=158815\n -> Gather (cost=382401.10..1050966.22 rows=6554560 width=56) (actual time=2891.177..6614.288 rows=12926549 loops=1)\n Workers Planned: 5\n Workers Launched: 5\n Buffers: shared hit=158815\n -> Partial HashAggregate (cost=381401.10..394510.22 rows=1310912 width=56) (actual time=2790.736..3680.249 rows=2154425 loops=6)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Batches: 1 Memory Usage: 417809kB\n Buffers: shared hit=158815\n Worker 0: Batches: 1 Memory Usage: 401425kB\n Worker 1: Batches: 1 Memory Usage: 409617kB\n Worker 2: Batches: 1 Memory Usage: 393233kB\n Worker 3: Batches: 1 Memory Usage: 385041kB\n Worker 4: Batches: 1 Memory Usage: 393233kB\n -> Parallel Seq Scan on assessmenticcqa_raw (cost=0.00..362006.30 rows=2585974 width=38) (actual time=0.042..1600.138 rows=2154425 loops=6)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH 
(CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 30428\n Buffers: shared hit=158815\nPlanning:\n Buffers: shared hit=3\nPlanning Time: 0.552 ms\nExecution Time: 54241.152 ms\n\n\n\nV11.2 - 74s\nHashAggregate (cost=1629249.48..1629251.48 rows=200 width=1764) (actual time=68833.447..73384.374 rows=742896 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk\n Buffers: shared read=173985\n -> Finalize HashAggregate (cost=1205455.43..1219821.33 rows=1436590 width=56) (actual time=19441.489..28630.297 rows=14176024 loops=1)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared read=173985\n -> Gather (cost=418922.41..1151583.31 rows=7182950 width=56) (actual time=3698.235..8445.971 rows=14176024 loops=1)\n Workers Planned: 5\n Workers 
Launched: 5\n Buffers: shared read=173985\n -> Partial HashAggregate (cost=417922.41..432288.31 rows=1436590 width=56) (actual time=3559.562..4619.406 rows=2362671 loops=6)\n Group Key: assessmenticcqa_raw.iccqar_iccassmt_fk, assessmenticcqa_raw.iccqar_ques_code\n Buffers: shared read=173985\n -> Parallel Seq Scan on assessmenticcqa_raw (cost=0.00..396656.48 rows=2835457 width=38) (actual time=0.261..1817.102 rows=2362671 loops=6)\n Filter: ((iccqar_ques_code)::text = ANY ('{\"DEBRIDEMENT DATE\",\"DEBRIDEMENT THIS VISIT\",\"DEBRIDEMENT TYPE\",\"DEPTH (CM)\",\"DEPTH DESCRIPTION\",\"DOES PATIENT HAVE PAIN ASSOCIATED WITH THIS WOUND?\",\"DRAIN PRESENT\",\"DRAIN TYPE\",\"EDGE / SURROUNDING TISSUE - MACERATION\",EDGES,EPITHELIALIZATION,\"EXUDATE AMOUNT\",\"EXUDATE TYPE\",\"GRANULATION TISSUE\",\"INDICATE OTHER TYPE OF WOUND CLOSURE\",\"INDICATE TYPE\",\"INDICATE WOUND CLOSURE\",\"IS THIS A CLOSED SURGICAL WOUND OR SUSPECTED DEEP TISSUE INJURY?\",\"LENGTH (CM)\",\"MEASUREMENTS TAKEN\",\"NECROTIC TISSUE AMOUNT\",\"NECROTIC TISSUE TYPE\",ODOR,\"OTHER COMMENTS REGARDING DEBRIDEMENT TYPE\",\"OTHER COMMENTS REGARDING DRAIN TYPE\",\"OTHER COMMENTS REGARDING PAIN INTERVENTIONS\",\"OTHER COMMENTS REGARDING PAIN QUALITY\",\"OTHER COMMENTS REGARDING REASON MEASUREMENTS NOT TAKEN\",\"PAIN FREQUENCY\",\"PAIN INTERVENTIONS\",\"PAIN QUALITY\",\"PERIPHERAL TISSUE EDEMA\",\"PERIPHERAL TISSUE INDURATION\",\"REASON MEASUREMENTS NOT TAKEN\",\"RESPONSE TO PAIN INTERVENTIONS\",SHAPE,\"SIGNS AND SYMPTOMS OF INFECTION\",\"SKIN COLOR SURROUNDING WOUND\",STATE,\"SURFACE AREA (SQ CM)\",\"TOTAL NECROTIC TISSUE ESCHAR\",\"TOTAL NECROTIC TISSUE SLOUGH\",TUNNELING,\"TUNNELING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"TUNNELING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",UNDERMINING,\"UNDERMINING SIZE(CM)/LOCATION - 12 - 3 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 3 - 6 O''CLOCK\",\"UNDERMINING 
SIZE(CM)/LOCATION - 6 - 9 O''CLOCK\",\"UNDERMINING SIZE(CM)/LOCATION - 9 - 12 O''CLOCK\",\"WIDTH (CM)\",\"WOUND PAIN LEVEL, WHERE 0 = \\\"NO PAIN\\\" AND 10 = \\\"WORST POSSIBLE PAIN\\\"\"}'::text[]))\n Rows Removed by Filter: 31608\n Buffers: shared read=173985\nPlanning Time: 22.673 ms\nExecution Time: 74110.779 ms\n\n\nThank you so much to the whole team to this great work!\n\nI'll send a separate email with the other issue I think I have isolated.\n\nThank you!\nLaurent Hasson\n\n\n-----Original Message-----\nFrom: [email protected] <[email protected]> \nSent: Wednesday, July 28, 2021 16:13\nTo: Tom Lane <[email protected]>\nCc: Peter Geoghegan <[email protected]>; David Rowley <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: RE: Big performance slowdown from 11.2 to 13.3\n\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nSent: Tuesday, July 27, 2021 23:31\nTo: [email protected]\nCc: Peter Geoghegan <[email protected]>; David Rowley <[email protected]>; Justin Pryzby <[email protected]>; [email protected]\nSubject: Re: Big performance slowdown from 11.2 to 13.3\n\nI wrote:\n> Yeah, I wouldn't sweat over the specific value. The pre-v13 behavior \n> was effectively equivalent to hash_mem_multiplier = infinity, so if \n> you weren't having any OOM problems before, just crank it up.\n\nOh, wait, scratch that: the old executor's behavior is accurately described by that statement, but the planner's is not. The planner will not pick a hashagg plan if it guesses that the hash table would exceed the configured limit (work_mem before, now work_mem times hash_mem_multiplier). So raising hash_mem_multiplier to the moon might bias the v13 planner to pick hashagg plans in cases where earlier versions would not have. 
This doesn't describe your immediate problem, but it might be a reason to not just set the value as high as you can.\n\nBTW, this also suggests that the planner is underestimating the amount of memory needed for the hashagg, both before and after.\nThat might be something to investigate at some point.\n\n\t\t\tregards, tom lane\n\n\nThis is very useful to know... all things I'll get to test after 13.4 is released. I'll report back when I am able to.\n\nThank you,\nLaurent.\n\n\n\n\n",
"msg_date": "Sat, 21 Aug 2021 05:42:38 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Big performance slowdown from 11.2 to 13.3"
}
] |